
Criticizing Government Was What They Knew How To Do

 

Paul Sabin, Public Citizens:

The Attack on Big Government and the Remaking of American Liberalism

(W.W. Norton & Co., 2021)

1965 marked the high point for Democratic President Lyndon Johnson’s Great Society program, an ambitious set of policy and legislative initiatives that envisioned using the machinery of the federal government to alleviate poverty, combat racial injustice, and address other pressing national needs.  Johnson was coming off a landslide victory in the November 1964 presidential election, having carried 44 states and the District of Columbia with the highest percentage of the popular vote of any presidential candidate in over a century.  Yet a decade and a half later, in January 1981, Republican Ronald Reagan, after soundly defeating Democratic incumbent Jimmy Carter, took the presidential oath of office declaring that “government is not the solution to our problem; government is the problem.”

How did government in the United States go in a fifteen-year period from being the solution to society’s ills to the cause of its problems?  How, for that matter, did the Democratic Party go from dominating the national political debate up through the mid-1960s to surrendering the White House to a former actor who had been considered too extreme to be a viable presidential candidate?  These are questions Yale University professor Paul Sabin poses at the outset of his absorbing Public Citizens: The Attack on Big Government and the Remaking of American Liberalism.  Focusing on the fifteen-year period 1965-1980, Sabin proffers answers centered on Ralph Nader and the “public interest” movement which Nader spawned.

1965 was also the year Nader rocketed to national prominence with his exposé of the auto industry’s safety failures, Unsafe at Any Speed.  General Motors notoriously assisted Nader in his rise by conducting a concerted campaign to harass the previously obscure author.  From there, Nader and the lawyers and activists in his movement — often called “Nader’s Raiders” — turned to such matters as environmentalism, consumer safety and consumer rights, arguing that the government agencies charged with regulating these matters invariably came to be captured by the very industries they were designed to regulate, without the voice of the consumer or end user being heard.  “Why has business been able to boss around the umpire?” (p.86) was one of Nader’s favorite rhetorical questions.

Because of both industry influence and bureaucratic ineffectiveness, government regulatory authority operated in the public interest only when pushed and prodded from the outside, Nader reasoned.  In Nader’s world, moreover, the Democratic and Republican parties were two sides of the same corrupt coin, indistinguishable in the degree to which they were both beholden to special corporate interests — “Tweedle Dee and Tweedle Dum,” as he liked to put it.

Reagan viewed government regulation from an altogether different angle.  Whereas Nader believed that government, through effective regulation of the private sector, could help make consumer goods safer, and air and water cleaner, Reagan sought to liberate the private sector from regulation.  He championed a market-oriented capitalism designed to “undermine, rather than invigorate, federal oversight” (p.167).  Yet, Sabin’s broadest argument is that Nader’s insistence over the course of a decade and a half that federal agencies used their powers for “nefarious and destructive purposes” (p.167) — the “attack on big government” portion of his sub-title — rendered plausible Reagan’s superficially similar attack.

The “remaking of American liberalism” portion of Sabin’s sub-title might have better been termed “unmaking,” specifically the unmaking of the political liberalism rooted in Franklin Roosevelt’s New Deal – the liberalism which Johnson sought to emulate and build upon in his Great Society, based on a strong and active federal government. Following in the New Deal tradition, Roosevelt’s Democratic party controlled the White House for all but eight years between 1933 and 1969.  Yet, when Reagan assumed the presidency in 1981, New Deal liberalism had clearly surrendered its claim to national dominance.

Most interpretations of how and why New Deal liberalism lost its clout are rooted in the 1960s, with the decade’s anti-Vietnam war and Civil Rights movements as the principal actors.  The Vietnam war separated older blue-collar Democrats, who often saw the war in the same patriotic terms as World War II, from a younger generation of anti-war activists who perceived no genuine US interests in the conflict and no meaningful difference in defense and foreign policy between Democrats and Republicans.  The Civil Rights movement witnessed the defection of millions of white Democrats, unenthusiastic about the party’s endorsement of full equality for African Americans, to the Republican Party.

Nader and the young activists following him were also “radicalized by the historical events of the 1960s, particularly the civil rights movement and the Vietnam War” (p.48), Sabin writes.  These were their “defining issues,” shaping “their view of the government and their ambitions for their own lives” (p.51).  We cannot imagine Nader’s movement “emerging in the form that it did separate from civil rights and the war” (p.48).  But by elaborating upon the role of the public interest movement in the breakdown of New Deal liberalism and giving more attention to the 1970s, Sabin adds nuance to conventional interpretations of that breakdown.

The enigmatic Nader is the central figure in Sabin’s narrative.  Much of the book analyzes how Nader and his public interest movement interacted with the administrations of Lyndon Johnson, Richard Nixon, Gerald Ford, and Jimmy Carter, along with brief treatment of the Reagan presidency and that of Bill Clinton.  The Carter years, 1977-1981, revealed the public interest movement’s most glaring weakness: its “inability to come to terms with the compromises inherent in running the executive branch” (p.142), as Sabin artfully puts it.

Carter was elected in 1976, when the stain of the Watergate affair and the 1974 resignation of Richard Nixon hovered over American politics, with trust in government at a low point.  Carter believed in making government regulation more efficient and effective, which he saw as a means of rebuilding public trust.   Yet, he failed to craft what Sabin terms a “new liberalism” that could “champion federal action while also recognizing government’s flaws and limitations” (p.156).

That failure was due in no small measure to frequent and harsh criticism emanating from public interest advocates, whose critique of the Carter administration, Sabin writes, “held those in power up against a model of what they might be, rather than what the push and pull of political compromise and struggle allowed” (p.160).  Criticizing government power was “what they knew how to do, and it was the role that they had defined for themselves”  (p.156). Metaphorically, it was “as if liberals took a bicycle apart to fix it but never quite figured out how to get it running again” (p.xvii).

 * * *

Sabin starts by laying out the general parameters of New Deal liberalism: a technocratic faith that newly created administrative agencies and the bureaucrats leading them would act in the public interest by serving as a counterpoint to the power of private, especially corporate, interests.  By the mid-1950s, the liberal New Deal conception of “managed capitalism” had evolved into a model based on what prominent economist John Kenneth Galbraith termed “countervailing powers,” in which large corporations, held in balance by the federal regulatory state, “would check each other’s excesses through competition, and powerful unions would represent the interests of workers.  Government would play a crucial role, ensuring that the system did not tilt too far in one direction or the other” (p.7-8).

Nader’s public interest movement was built around a rejection of Galbraith’s countervailing power model.  The model failed to account for the interests of consumers and end users, as the economist himself admitted later in his career.  If there was to be a countervailing power, Nader theorized, it would have to come through the creation of “independent, nonbureaucratic, citizen-led organizations that existed somewhat outside the traditional American power structure” (p.59).  Only such organizations provided the means to keep power “insecure” (p.59), as Nader liked to say.

Nader’s vision could be described broadly as “ensuring safety in every setting where Americans might find themselves: workplace, home, doctor’s office, highway, or just outside, breathing the air”  (p.36).  In a 1969 essay in the Nation, Nader termed car crashes, workplace accidents, and diseases the “primary forms of violence that threatened Americans” (p.75), far exceeding street crime and urban unrest.  For Nader, environmental and consumer threats revealed the “pervasive failures and corruption of American industry and government” (p.76).

Nader was no collectivist, neither a socialist nor a New Dealer.  He emphasized open and competitive markets, small private businesses, and especially an activated citizenry — the “public citizens” of Sabin’s title.  More than any peer, Nader sought to “create institutions that would mobilize and nurture other citizen activists” (p.35).  To that end, Nader founded dozens of public interest organizations, which were able to attract idealistic young people — lawyers, engineers, scientists, and others, overwhelmingly white, largely male — to dedicate their early careers to opposing the “powerful alliance between business and government” (p.24).

Nader envisioned citizen-led public interest organizations serving as a counterbalance not only to business and government but also to labor.  Although Nader believed in the power of unions to represent workers, he was “deeply skeptical that union leaders would be reliable agents for progressive reform” (p.59).  Union bosses in Nader’s view “too often positioned themselves as partners with industry and government, striking bargains that yielded economic growth, higher wages, and union jobs at the expense of the health and well-being of workers, communities, and the environment” (p.59).  Nader therefore “forcefully attacked the unions for not doing enough to protect worker safety and health or to allow worker participation in governance” (p.64).

Nader’s Unsafe at Any Speed was modeled after Rachel Carson’s groundbreaking environmental tract Silent Spring, to the point that it was termed the “Silent Spring of traffic safety” (p.23).  Nader’s auto safety advocacy, Sabin writes, emerged from “some of the same wellsprings as the environmental movement, part of an increasingly shared postwar concern about the harmful and insidious impacts of new technologies and processes” (p.23).  In 1966, a year after publication of Unsafe at Any Speed, Congress passed two landmark pieces of legislation, the Traffic Safety Act and the Highway Safety Act, which forced manufacturers to design safer cars and pressed states to carry out highway safety programs.  Nader then branched out beyond auto safety to tackle issues like meat inspection, natural-gas pipelines, and radiation safety.

Paradoxically, the Nixon years were among the most fruitful for Nader and the public interest movement.  Ostensibly pro-business and friendly with blue-collar Democrats, Nixon presided over a breathtaking expansion of federal regulatory authority until his presidency was cut short by the Watergate affair.  The Environmental Protection Agency was created in 1970, consolidating several smaller federal units.  New legislation which Nixon signed regulated air and water pollution, energy production, endangered species, toxic substances, and land use — “virtually every sector of the US economy” (p.114), Sabin writes.

The key characteristics of Nader-influenced legislation were deadlines and detailed mandates, along with authority for citizen suits and judicial review, a clear break from earlier regulatory strategies.  The tough legislation signaled a “profound and pervasive distrust of government even as it expanded federal regulatory powers” (p.82).   Nader and the public interest movement went after Democrats in Congress with a fervor at least equal to that with which they attacked Republican-led regulatory agencies.  Nader believed that “you didn’t attack your enemy if you wanted to accomplish something, you attacked your friend”  (p.82).

In the early 1970s, the public interest movement targeted Democratic Maine Senator Edmund Muskie, the party’s nominee for Vice-President in 1968, whose support for the environmental movement had earned him the moniker “Mr. Pollution Control.” Declaring his environmental halo unwarranted, the movement sought to take down a man who clearly wanted to ride the environmental issue to the White House.  Nader’s group also went after long-time liberal Democrat Jennings Randolph of West Virginia over coal-mining health and safety regulations.  The adversarial posture toward everyone in power, Democrat as well as Republican, continued into the short interim administration of Gerald Ford, who assumed the presidency in the wake of the Watergate scandal.  And it continued unabated during the administration of Jimmy Carter.

As the Democratic nominee for president, Carter had conferred with Nader during the 1976 campaign and thought he had the support of the public interest movement when he entered the White House in January 1977.  Many members of the movement took positions in the new administration, where they could shape the agencies they had been pressuring.  The new president sought to incorporate the public interest movement’s critiques of government into a “positive vision for government reform,” promoting regulatory approaches that “cut costs and red tape without sacrificing legitimate regulatory goals” (p.186).

Hoping to introduce more flexible regulatory strategies that could achieve environmental and health protection goals at lower economic cost, Carter sacrificed valuable political capital by clashing with powerful congressional Democrats over wasteful and environmentally destructive federal projects. Yet, public interest advocates faulted Carter for his purported lack of will more than they credited him for sacrificing his political capital for their causes.  They saw the administration’s questioning of regulatory costs and the redesign of government programs as “simply ways to undermine those agencies” (p.154).  Their lack of enthusiasm for Carter severely undermined his reelection bid in the 1980 campaign against Ronald Reagan.

Reagan’s victory “definitively marked the end of the New Deal liberal period, during which Americans had optimistically looked to the federal government for solutions” (p.165), Sabin observes.  Reagan and his advisors “vocally rejected, and distanced themselves from, Carter’s nuanced approach to regulation”  (p.172). To his critics, Reagan appeared to be “trying to shut down the government’s regulatory apparatus” (p.173).

But in considering the demise of New Deal liberalism, Sabin persuasively demonstrates that the focus on Reagan overlooks how the post-World War II administrative state “lost its footing during the 1970s” (p.165).    The attack on the New Deal regulatory state that culminated in Reagan’s election, usually attributed to a rising conservative movement, was also “driven by an ascendant liberal public interest movement” (p.166).   Sabin’s bottom line: blaming conservatives alone for the end of the New Deal is “far too simplistic” (p.165).

* * *

Sabin mentions Nader’s 2000 presidential run on the Green Party ticket only at the end and only in passing.  Although the Nader-inspired public interest movement had wound down by then, Nader gained widespread notoriety that year when he gathered some 97,000 votes in Florida, a state which Democratic nominee Al Gore lost officially by 537 votes out of roughly six million cast (with no small amount of assistance from a controversial 5-4 Supreme Court decision).  Nader’s entire career had been a rebellion against the Democratic Party in all its iterations, and his quixotic run in 2000 demonstrated that he had not outgrown that rebellion.  His presidential campaign took his “lifelong criticism of establishment liberalism to its logical extreme” (p.192).

Thomas H. Peebles

Paris, France

May 13, 2022

 


Looking at the Arab Spring Through the Lens of Political Theory

Noah Feldman, The Arab Winter: A Tragedy

(Princeton University Press, 2020)

2011 was the year of the upheaval known as the “Arab Spring,” a time when much of the Arabic-speaking world seemed to have embarked on a path toward democracy—or at least a path away from authoritarian government. The upheaval began in December 2010, when a twenty-six-year-old Tunisian street fruit vendor, Mohamed Bouazizi, distraught over confiscation of his cart and scales by municipal authorities, ostensibly because he lacked a required work permit, doused his body with gasoline and set himself on fire.  Protests aimed at Zine El Abidine Ben Ali, Tunisia’s autocratic ruler since 1987, began almost immediately after Bouazizi’s self-immolation.  On 14 January 2011, Ben Ali fled to Saudi Arabia, ending his rule.

One month later, Hosni Mubarak, Egypt’s strongman president since 1981, resigned his office. By that time, protests against ruling autocrats had broken out in Libya and in Yemen. In March, similar protests began in Syria. By year’s end, Yemen’s out-of-touch leader, Ali Abdullah Saleh, had been forced to resign, and Colonel Muammar Qaddafi—who had ruled Libya since 1969—was driven from office and shot by rebels. Only Syria’s Bashar al-Assad still clung to power, but his days, too, appeared numbered.

The stupefying departures in a single calendar year of four of the Arab world’s seemingly most firmly entrenched autocrats sent soaring the hopes of many, including the present writer.  Finally, we said, at last—at long, long last—democracy had broken through in the Middle East. The era of dictators and despots was over in that part of the world, or so we allowed ourselves to think. It did not seem far-fetched to compare 2011 to 1989, when the Berlin Wall fell and countries across Central and Eastern Europe were suddenly out from under Soviet domination.

But as we know now, ten years later, 2011 was no 1989: the euphoria and sheer giddiness of that year turned to despair.  Egypt’s democratically elected president Mohamed Morsi was replaced in 2013 by a military government that seems at least as ruthlessly autocratic as that of Mubarak.  Syria broke apart in an apparently unending civil war that continues to this day, with Assad holding onto power amidst one of the twenty-first century’s most severe migrant and humanitarian crises.  Yemen and Libya appear to be ruled, if at all, by tribal militias and gangs, conspicuously lacking stabilizing institutions that might hold the countries together.  Only Tunisia offers cautious hope of a democratic future. And hovering over the entire region is the threat of brutal terrorism, represented most terrifyingly by the self-styled Islamic State in Iraq and Syria, ISIS.

It is easy, therefore, almost inescapable, to write off the Arab Spring as a failure—to saddle it with what Harvard Law School professor Noah Feldman terms a “verdict of implicit nonexistence” (p.x) in The Arab Winter: A Tragedy.  But Feldman, a seasoned scholar of the Arabic-speaking world, would like us to look beyond notions of failure and implicit nonexistence to consider the Arab Spring and its aftermath from the perspective of classical political theory.  Rather than emphasizing chronology and causation, as historians might, political theorists—the “philosophers who make it their business to talk about government” (p.8)—ask a normative question: what is the right way to govern? Looking at the events of 2011 and their aftermath from this perspective, Feldman hopes to change our “overall sense of what the Arab spring meant and what the Arab winter portends” (p.xxi).

In this compact but rigorously analytical volume, Feldman considers how some of the most basic notions of democratic governance—political self-determination, popular sovereignty, political agency, and the nature of political freedom and responsibility—played out over the course of the Arab Spring and its bleak aftermath, the “Arab Winter” of his title.   Feldman focuses specifically on Egypt, Tunisia, Syria, and ISIS, each meriting a separate chapter, with Libya and Yemen mentioned intermittently.  In an introductory chapter, he addresses the Arab Spring collectively, highlighting factors common to the individual countries that experienced the events of the Arab Spring and ensuing “winter.”  In each country, those events took place within a framework defined by  “political action that was in an important sense autonomous” (p.xiii).  

The Arab Spring marked a crucial, historical break from the era in which empires—Ottoman, European and American—were the primary arbiters of Arab politics.  The “central political meaning” of the Arab Spring and its aftermath, Feldman argues, is that it “featured Arabic-speaking people acting essentially on their own, as full-fledged, independent makers of their own history and of global history more broadly” (p.xii).  The forces arrayed against those seeking to end autocracy in their countries were also Arab forces, “not empires or imperial proxies” (p.xii).  Many of the events of the Arab Spring were nonetheless connected to the decline of empire in the region, especially in the aftermath of the two wars fought in Iraq in 1991 and 2003.  The “failure and retreat of the U.S. imperial presence” was an “important condition in setting the circumstances for self-determination to emerge” (p.41).  

While the massive protests against existing regimes that erupted in Tunisia, Egypt, Syria, Libya, and Yemen in the early months of 2011 were calls for change in the protesters’ own nation-states, there was also a broader if somewhat vague sense of trans-national Arab solidarity in the cascading calls for change.  By “self-consciously echoing the claims of other Arabic-speaking protestors in other countries,” Feldman argues, the protesters were “suggesting that a broader people—implicitly the Arab people or peoples—were seeking change from the regime or regimes . . . that were governing them” (p.2). The constituent peoples of a broader trans-national Arab “nation” were rising, “not precisely together but also not precisely separately” (p.29).

The early-2011 protests were based on the claim that “the people” were asserting their right to take power from the existing government and reassign it, a claim that to Feldman “sounds very much like the theory of the right to self-determination” (p.11).  The historian and the sociologist would immediately ask who was making this “grand claim on behalf of the ‘people’” (p.11).  But to the political theorist, the most pressing question is “whether the claim was legitimate and correct” (p.11).  Feldman finds the answer in John Locke’s Second Treatise of Government, first published in 1689. Democratic political theory since the Second Treatise has strongly supported the idea that the people of a constituent state may legitimately seize power from unjust and undemocratic rulers. Such an exercise of what could be termed the right to revolution is “very close to the central pillar of democratic theory itself” (p.11).  Legitimate government “originates in the consent of the governed;” a government not derived from consent “loses its legitimacy and may justifiably be replaced” (p.12).  The Egypt of the Arab Spring provides one of recent history’s most provocative applications of the Lockean right to self-determination.

* * *

Can a people which opted for constitutional democracy through a legitimate exercise of its political will opt to end democracy through a similarly legitimate exercise of its political will?  Can a democracy vote itself out of existence?  In his chapter on Egypt, Feldman concludes that the answer to these existential questions of political theory is yes, a conclusion that he characterizes as “painful” (p.59).  Just as massive and legitimate protests in Cairo’s Tahrir Square in January 2011 paved the way for forcing out aging autocrat Hosni Mubarak, so too did massive and legitimate protests in the same Tahrir Square in June 2013 pave the way for forcing out democratically-elected president Mohamed Morsi.

Morsi was a member of the Muslim Brotherhood—a movement banned under Mubarak that aspired to a legal order frequently termed “Islamism,” based upon Sharia Law and the primacy of the Islamic Quran.  Morsi won the presidency in June 2012 by a narrow margin over a military-affiliated candidate, but was unsuccessful almost from the beginning of his term.  In Feldman’s view, his fatal error was that he never grasped the need to compromise.  “If the people willed the end of the Mubarak regime, the people also willed the end of the Morsi regime just two and a half years later” (p.59), he contends. The Egyptian people rejected constitutional democracy, “grandly, publicly, and in an exercise of democratic will” (p.24).  While they may have committed an “historical error of the greatest consequence by repudiating their own democratic process,” that was the “choice the Egyptian people made” (p.63).

Unlike in Egypt, in Tunisia the will of the people—what Feldman terms “political agency”—produced what then appeared to be a sustainable if fragile democratic structure.  Tunisia succeeded because its citizens from across the political spectrum “exercised not only political agency but also political responsibility” (p.130).  Tunisian protesters, activists, civil society leaders, politicians, and voters all “realized that they must take into account the probable consequences of each step of their decision making” (p.130).  

Moving the country toward compromise were two older politicians from opposite ends of the political spectrum: seventy-two-year-old Rached Ghannouchi, representing Ennahda—an Islamist party with ties to the Egyptian Muslim Brotherhood—and Beji Caid Essebsi, then eighty-five, a rigorous secularist with an extensive record of government service.  Together, the two men led a redrafting of Tunisia’s Constitution, in which Ennahda dropped the idea of Sharia Law as the foundation of the Tunisian State in favor of a constitution that protected religion from statist dominance and guaranteed liberty for political actors to “promote religious values in the public sphere”—in short, a constitution that was “not simply democratic but liberal-democratic” (p.140).  

Tunisia had another advantage that Egypt lacked: a set of independent civil society institutions that had a “stake in continued stability,” along with a “stake in avoiding a return to autocracy” (p.145).  But Tunisia’s success was largely political, with no evident payoff in the country’s economic fortunes. The “very consensus structures that helped Tunisia avoid the fate of Egypt,” Feldman warns, ominously but presciently, have “created conditions in which the underlying economic causes that sparked the Arab spring protests have not been addressed” (p.150).   

As if to prove Feldman’s point, this past summer Tunisia’s democratically-elected President Kais Saied, a constitutional law professor like Feldman, froze Parliament and fired the Prime Minister, “vowing to attack corruption and return power to the people. It was a power grab that an overwhelming majority of Tunisians greeted with joy and relief,” The New York Times reported.  One cannot help but wonder whether Tunisia is about to confront and answer the existential Lockean question in a manner similar to Egypt a decade ago.

Protests against Syrian President Bashar al-Assad began after both Ben Ali in Tunisia and Mubarak in Egypt had been forced out of office, and initially seemed to be replicating those of Tunisia and Egypt.  But the country degenerated into a disastrous civil war that has rendered it increasingly dysfunctional.  The key to understanding why lies in the country’s denominational-sectarian divide, in which the Assad regime—a minority-based dictatorship of Alawi Muslims, followers of an off-shoot of Shiite Islam representing about 15% of the Syrian population—had disempowered much of the country’s Sunni majority.  Any challenge to the Assad regime was understood, perhaps correctly, as an existential threat to Syria’s Alawi minority.  Instead of seeking a power-sharing agreement that could have prolonged his regime, Bashar sought the total defeat of his rivals.  The regime and the protesters were thus divided along sectarian lines and both sides “rejected compromise in favor of a winner-take-all struggle for control of the state” (p.78).

The Sunnis challenging Assad hoped that Western powers, especially the United States, would intervene in the Syrian conflict, as they had in Libya.  United States policy, however, as Feldman describes it, was to keep the rebel groups “in the fight, while refusing to take definitive steps that would make them win.”  As military strategy, this policy “verged on the incoherent” (p.90).  President Barack Obama wanted to avoid political responsibility for Bashar’s fall, if it came to that, in order to avoid the fate of his predecessor, President George W. Bush, who was considered politically responsible for the chaos that followed the United States intervention in Iraq in 2003.  But the Obama strategy did not lead to stability in Syria.  It had the opposite effect, notably by creating the conditions for the Islamic State, ISIS, to become a meaningful regional actor.

ISIS is known mostly for its brutality and fanaticism, such as beheading hostages and smashing precious historical artifacts.  While these horrifying attributes cannot be gainsaid, there is more to the group that Feldman wants us to see.  ISIS in his view is best understood as a utopian, revolutionary-reformist movement that bears some similarities to other utopian revolutionary movements, including John Calvin’s Geneva and the Bolsheviks in Russia in the World War I era.  The Islamic State arose in the aftermath of the failure and overreach of the American occupation of Iraq.  But it achieved strategic relevance in 2014 with the continuing breakdown of the Assad regime’s sovereignty over large swaths of Syrian territory, creating the possibility of a would-be state that bridged the Iraq-Syria border.  Without the Syrian civil war, “there would have been no Islamic State” (p.107), Feldman argues.

The Islamic State attained significant success through its appeal to Sunni Muslims disillusioned with modernist versions of political Islam of the type represented by the Muslim Brotherhood in Egypt and Ennahda in Tunisia.  With no pretensions of adopting democratic values and practices, which it considered illegitimate and un-Islamic, ISIS sought to take political Islam back to pre-modern governance.  It posited a vision of Islamic government for which the foundation was the polity “once ruled by the Prophet and the four ‘rightly guided’ caliphs who succeeded him in the first several decades of Islam” (p.102).

But unlike Al-Qaeda or other ideologically similar entities, the Islamic State actually conquered and held enough territory to set up a functioning state in parts of Syria.  Until dislodged by a combination of Western air power, Kurdish and Shia militias supported by Iran, and active Russian intervention, ISIS was able to put into practice its revolutionary utopian form of government.  As a “self-conscious, intentional product of an organized group of people trying to give effect to specific political ideas and to govern on their basis,” ISIS represents for Feldman the “strangest and most mystifying outgrowth of the Arab spring” (p.102).

* * *

Despite dispiriting outcomes in Syria and Egypt, alongside those of Libya and Yemen, Feldman is dogged in his view that democracy is not doomed in the Arabic-speaking world.  Feldman’s democratic optimism combines Aristotle’s notion of “catharsis,” a cleansing that comes after tragedy, with the Arabic notion of tragedy itself, which can have a “practical, forward looking purpose. It can lead us to do better” (p.162).  The current winter of Arab politics “may last a generation or more,” he concludes.  “But after the winter—and from its depths—always comes another spring” (p.162).  But a generation, whether viewed through the lens of the political theorist or that of the historian, is a long time to wait for those Arabic-speaking people yearning to escape autocracy, civil war, and terrorist rule.

Thomas H. Peebles 

Bethesda, Maryland 

November 10, 2021 

 


Love Actually

 

Ann Heberlein, On Love and Tyranny:

The Life and Politics of Hannah Arendt

Translated from Swedish by Alice Menzies (Pushkin Press, 2021)

Before she became a celebrated New York public intellectual, Hannah Arendt (1906-1975) lived through some of the 20th century’s darkest moments. She fled her native Germany after Hitler came to power in 1933, living in France for several years.  In 1940, she spent time in two internment camps, then departed for the United States, where she resided for the second half of her life.  In 1950, Arendt became an American citizen, ending nearly two decades of statelessness.  The following year, she established her reputation as a serious thinker with The Origins of Totalitarianism, a trenchant analysis of how oppressive one-party systems came to rule both Nazi Germany and the Soviet Union in the first half of the 20th century.  As a commentator observed in The Washington Post, Arendt’s work diagnosed brilliantly the “forms of alienation and dispossession that diminished human dignity, threatened freedom and fueled the rise of authoritarianism.”

The Origins of Totalitarianism was one of a handful of older works that experienced a sudden uptick in sales in early 2017, after Donald Trump became president of the United States (George Orwell’s 1984 was another).  The authoritarian impulses that Arendt explained and Trump personified seem likely to be with us for the foreseeable future, both in the United States and other corners of the world.  For that reason alone, a fresh look at Arendt is welcome.  That is the contribution of  Ann Heberlein, a Swedish novelist and non-fiction writer, with On Love and Tyranny: The Life and Politics of Hannah Arendt.  

Heberlein’s work, ably translated from the original Swedish by Alice Menzies, constitutes the first major Arendt biography since 1982, when Elisabeth Young-Bruehl’s highly-acclaimed but dense Hannah Arendt: For Love of the World first appeared.  On Love and Tyranny, by contrast, is easy to read yet hits all the highlights of Arendt’s life and work.  Disappointingly, there are no footnotes and little in the way of bibliography. Heberlein makes use of the diaries of a key if problematic figure in Arendt’s life, philosopher Martin Heidegger, which only became public in 2014 and cast additional light on Heidegger’s Nazi sympathies.  But it is difficult to ascertain from the book itself what other new or different sources Heberlein utilized that might have been unavailable to Young-Bruehl.

Although Arendt studied philosophy as a university student, she preferred to describe herself as a political theorist.  But despite the reference to politics in her title, Heberlein’s portrait accents Arendt’s philosophic side.  She emphasizes how the turbulent circumstances that shaped Arendt’s life forced her to apply in the real world many of the abstract philosophical and moral concepts she had wrestled with in the classroom.  As the title suggests, these include love and tyranny,  but also good vs. evil, truth, obligation, responsibility, forgiveness, and reconciliation.

At Marburg University, where she entered in 1924 as an 18-year-old first-year student, Arendt not only studied philosophy under Heidegger, already a rising star in German academic circles, but also began a passionate love affair with the man.  Heidegger was then nearly twice her age and married with two young sons (their affair is detailed in Daniel Maier-Katkin’s astute Stranger from Abroad: Hannah Arendt, Martin Heidegger, Friendship and Forgiveness, reviewed here in 2013).  Arendt left Heidegger behind when she fled Germany in 1933, but after World War II re-established contact with her former teacher, by then disgraced because of his association with the Nazi regime. A major portion of Heberlein’s work scrutinizes Arendt’s subsequent, post-war relationship with Heidegger.

Heberlein also zeroes in on Arendt’s very different post-war relationship to a seemingly very different man, Adolph Eichmann, Hitler’s loyal apparatchik who was responsible for moving approximately 1.5 million Jews to Nazi death camps.  Arendt’s series of articles for The New Yorker on Eichmann’s trial in Jerusalem in 1961 became the basis for another of her best-known works, Eichmann in Jerusalem: A Report on the Banality of Evil, published in 1963, in which she portrayed Eichmann as neither a fanatic nor a pathological killer, but rather a stunningly mediocre individual, motivated more by professional ambition than by ideology.

The phrase “banality of evil,” now commonplace thanks to Arendt, followed her for the rest of her days. How the phrase applies to Eichmann is of course well-ploughed ground, to which Heberlein adds a few insights.  Less obviously, Heberlein lays the groundwork to apply the phrase to Heidegger.  Her analysis of the banality of evil suggests that the differences between Heidegger and Eichmann were less glaring in the totalitarian Nazi environment, where whole populations risked losing their ability to distinguish between right and wrong.

* * *

Arendt was the only child of Paul and Martha Arendt, prosperous, progressive, and secular German Jews.  Paul died when Hannah (born Johanna) was 7, but she remained close to her mother, who immigrated with her to the United States in 1941. Meeting with Heidegger as a first-year student in 1924 was for Arendt “synonymous with her entry into the world of philosophy,” Heberlein writes.  Heidegger was “The Philosopher personified: brilliant, handsome, poetic, and simply dressed” (p.28).  The Philosopher made clear to the first-year student that he was not prepared to leave his wife and family or the respectability of his academic position for her.  She met him whenever he had time and was able to escape his wife.

The unbalanced Arendt-Heidegger relationship “existed solely in the shadows: never acknowledged, never visible” (p.40), as Heberlein puts it.  Arendt was never able to call Heidegger her partner because she “possessed him for brief intervals only, and the fear of losing him was ever-present” (p.41).  Borrowing a perspective Heberlein attributes to Kierkegaard and Goethe, she describes Arendt’s love for Heidegger as oscillating “between great joy and deep sorrow—though mostly sorrow” (p.31).  For these writers, whom Arendt knew well, love consisted “largely of suffering, of longing, and of distance” (p.31).  The 18-year-old, Heberlein concludes, was “struck down by a passion, possibly even an obsession, that would never fade” (p.31).

Arendt left Marburg after one year, ending up at Heidelberg University.  She later admitted that she needed to get away from Heidegger.  But she continued to see him while she wrote her dissertation at Heidelberg on St. Augustine’s conception of love. Her advisor there was the esteemed philosopher Karl Jaspers, with whom she remained friends up to his death in 1969.

After university, Arendt worked in Berlin, where she met Gunther Stern, a journalist, poet and former Heidegger student who was closely associated with the communist playwright Bertolt Brecht.  Arendt married Stern in 1929 at age 23.  Sometime during her period in Berlin, she cut off all contact with Heidegger.  But after the Nazis came to power, Arendt began hearing alarming rumors about several specific anti-Semitic actions attributed to Heidegger at Freiburg University, where he had been appointed rector.  In a letter, she asked him to respond to the rumors, and received back a self-pitying, aggressive reply that she found entirely unconvincing.

1933 was also the year Arendt and her mother left Germany and wound up in Paris. There she met Heinrich Blücher, a self-taught, left-wing German political activist. She and Stern had by then been living apart for several years, and she divorced him to marry Blücher in early 1940. The couple remained together until Blücher’s death in 1970. They were sent to separate internment camps just prior to the fall of France in 1940, but escaped together through Spain to Portugal, from which they immigrated to the United States in 1941 and settled in New York.

Arendt’s first return trip to Europe came in late 1949 and early 1950.  With Blücher’s approval, she sought out her former teacher, then in Freiburg, meeting with Heidegger and his wife Elfride in February 1950.  Understandably suspicious, Elfride seems to have understood that Arendt was in a position to help rehabilitate her husband, besmirched by his association with the Nazi regime, and accepted that he wanted Arendt to again be part of his life.  Arendt maintained a warm relationship with her former professor until her death in 1975 (Heidegger died less than a year later), writing regularly and meeting on several occasions.

In the post-war years, as Arendt’s star was rising, she became Heidegger’s unpaid agent, working to have his writings translated into English and negotiating contracts on his behalf.  She also became an enthusiastic Heidegger defender, going to great lengths to excuse, smooth over, and downplay his association with Nazism.  She once compared Heidegger to Thales, the ancient Greek philosopher who was so busy gazing at the stars that he failed to notice that he had fallen into a well.

On the occasion of Heidegger’s 80th birthday in 1969, she delivered an over-the-top tribute to her former professor, reducing Heidegger’s dalliances with Nazism to a “10-month error,” which in her view he corrected quickly enough, “more quickly and more radically than many of those who later sat in judgment over him” (p.236).  Arendt argued that Heidegger had taken “considerably greater risks than were usual in German literary and university life during that period” (p.237).  As Heberlein points out, Arendt’s tribute was a counter-factual fantasy: there was no empirical support for this whitewashed version of the man.

Heidegger had openly endorsed Nazi “restructuring” of universities to exclude Jews when he became rector at Freiburg in 1933 and his party membership was well known. His diaries, published in 2014, made clear that he was aware of the Holocaust, believed it was at least partly the Jews’ fault and, even though he ceased to be active in party affairs sometime in the mid-1930s, remained until 1945 a “fully paid-up, devoted supporter of Adolph Hitler” (p.238).  Arendt of course didn’t have access to these diaries when she rose to Heidegger’s defense, but it seems unlikely they would have changed her perspective.

Arendt’s 1969 tribute left little doubt she had found her way to forgive Heidegger for his association and support for a regime that had murdered millions of her fellow Jews, wreaked destruction on much of Europe, and forced her to flee her native country to start her life anew an ocean away. But why? Heberlein writes that forgiveness for Arendt was the conjunction of the conflicting powers of love and evil.  “Without evil, without betrayal, insults and lies, forgiveness would be unnecessary; without love, forgiveness would be impossible” (p.225).  Arendt found the strength to forgive Heidegger in the “utterly irrational emotion” that was love. Her love for Heidegger was “strong, overwhelming, and desperate. The power of the passion Hannah felt for Martin was stronger than the sorrow she felt at his betrayal” (p.226).  But whether it was right or wrong for her to forgive Heidegger, Heberlein demurely concludes, is a question only Arendt could have answered.

Did Arendt also forgive Eichmann for his direct role in transporting a staggering number of Jews to death camps? Is forgiveness wrapped within the notion of the banality of evil? Daniel Maier-Katkin suggests in his study of the Arendt-Heidegger relationship that in her experience with Heidegger, Arendt may have come to the notion of the banality of evil “intuitively and without clear articulation.”  That experience may have prepared her to comprehend that each man had been “transformed by the total moral collapse of society into an unthinking cog in the machinery of totalitarianism.”

Heberlein’s analysis of Eichmann leads to the conclusion that the notion of the banality of evil was sufficiently elastic to embrace Heidegger.  Heberlein sees the influence of Kant’s theory of “radical evil” in Arendt’s notion of the banality of evil.  For Arendt, as for Kant, evil is a form of temptation, in which the desires of individuals overrule their “duty to listen to, and act in accordance with, goodwill” (p.198).   The antidote to evil is not goodness but reflection and responsibility.  Evil grows when people “cease to think, reflect, and choose between good and evil, between taking part or resisting” (p.138).  Arendt’s sense of evil recognizes an uncomfortable truth that seems as applicable to   Heidegger as to Eichmann, that most people have a tendency to:

follow the path of least resistance, to ignore their conscience and do what everyone else is doing.  As the exclusion, persecution, and ultimately, annihilation of Jews became normalized, there were few who protested, who stood up for their own principles (p.199).

For Arendt, forgiveness of such persons is possible. But not all evil can be explained in terms of obedience, ignorance, or neglect. There is such a thing as evil that is “as incomprehensible as it is unforgiveable” (p.200).  In Heberlein’s interpretation of Arendt, the genuinely evil person is the one who is “leading the way, someone initiating the evil, someone creating the context, ideology, or prejudices necessary for the obedient masses to blindly adopt” (p.201).  Whether Eichmann falls outside this standard for genuine evil is debatable. But the standard could comfortably exclude Heidegger, as Arendt had in effect argued in her 1969 tribute to her former teacher.

Arendt compounded her difficulties with the separate argument in Eichmann in Jerusalem that the Jewish councils that the Nazis established in occupied countries cooperated in their own annihilation.  The “majority of Jews inevitably found themselves confronted with two enemies – the Nazi authorities and the Jewish authorities,” Arendt wrote.  The “pathetic and sordid” behavior of Jewish governing councils was for Arendt the “darkest chapter” of the Holocaust – darker than the mass shootings and gas chambers — because it “showed how the Germans could turn victim against victim.”

The notion that Arendt was blaming the Jews for their persecution “quickly took hold,” Heberlein writes, and she was “forced to put up with questions about why she thought the Jews were responsible for their own deaths, in virtually every interview until she herself died” (p.192).  After Eichmann in Jerusalem, Arendt was shunned by many former colleagues and friends, repeatedly accused of being an anti-Israel, self-hating Jew, “heartless and devoid of empathy . . . cold and indifferent” (p.192).  When her husband died in 1970, Arendt’s isolation increased.  She was again in exile, this time existential, which surely enhanced her emotional attachment to Heidegger, the sole remaining link to the world of her youth.

* * *

Arendt’s ardent post-war defense of Heidegger, while generating little of the brouhaha that surrounded Eichmann in Jerusalem, is also a critical if puzzling piece in understanding her legacy.  Should we consider the continuation of her relationship with Heidegger as the simple but powerful triumph of Eros, an enduring schoolgirl crush that even the horrors of Nazism and the Holocaust were unable to dispel?  Heberlein’s earnest biography points us inescapably in this direction.

Thomas H. Peebles

La Châtaigneraie, France

October 12, 2021

[NOTE: A nearly identical version of this review has also been posted to the Tocqueville 21 blog  maintained in connection with the American University of Paris’ Tocqueville Review and its Center for Critical Democracy Studies]

 

 


Viewing Responsibility for Human Rights Through a Forward-Looking Lens

 

 

 

Kathryn Sikkink, The Hidden Face of Rights:

Toward a Politics of Responsibilities (Yale University Press, 2020)

Kathryn Sikkink, Professor at the Harvard Kennedy School of Government, is one of the leading academic experts on international human rights law—the body of principles arising out of a series of post-World War II human rights treaties, conventions, and other international instruments. Recently, I reviewed her Evidence for Hope: Making Human Rights Work in the 21st Century here.  In that work, Sikkink took on a host of critics of the current state of international human rights law who had challenged both its legitimacy and its effectiveness.  Before Evidence for Hope, she was the author of the highly acclaimed Justice Cascade: How Human Rights Prosecutions Are Changing World Politics, where she argued forcefully for holding individual state officials, including heads of state, accountable for human rights violations.

Now, Sikkink asks us to look at human rights, and especially how we can best implement those rights, through a different lens.  In her most recent work, The Hidden Face of Rights: Toward a Politics of Responsibilities, portions of which were originally delivered as lectures at Yale University’s Program in Ethics, Politics and Economics, Sikkink argues that we need to increase our focus on the duties, obligations, and responsibilities undergirding human rights. Although “duties,” “obligations,” and “responsibilities” are nearly functional equivalents, “responsibilities” is Sikkink’s preferred term. Moreover, Sikkink is concerned with what she terms “forward-looking” rather than “backward-looking” responsibilities.

Forward-looking responsibility turns largely on the development of norms, the voluntary acceptance of mutual responsibilities about appropriate behavior.  It stands in contrast to backward-looking responsibilities, which are based on a “liability model” that asks who is responsible for a violation of human rights and how that person or institution can be held accountable — or responsible.  Sikkink seeks to supplement rather than supplant the liability model, describing it as appropriate in some contexts but not others.  Although necessary, backward-looking responsibilities “cannot address many of the complex, decentralized issues that characterize human rights today” (p.40), she contends.

For Sikkink, forward-looking responsibility is ethical and political, not legal.  She is not arguing to make forward-looking responsibilities legally binding.  Nor is she seeking to create new rights—only to implement existing ones more effectively.  But she uses the term ‘human rights’ broadly, to include the political, civil, economic, and social rights embodied in the major post-war treaties and conventions, along with new rights, such as the right to a clean environment and to freedom from sexual assault.

The crux of Sikkink’s argument is that voluntary acceptance of norms — not fear of sanctions — is in most cases a more effective path to full implementation of human rights.  Sustaining and reinforcing norms entails a pragmatic, “what-might-work” approach, brought about by “networked responsibilities,” one of her key terms, a collective effort in which all those connected to a given injustice — the “agents of justice,” more often private individuals than state actors — step forward to do their share. One of Sikkink’s principal objectives is to bring the theory of human rights into line with existing practice.

Sikkink notes that the activist community charged with implementation of human rights already has “robust practices of responsibility. But it does not yet have explicit norms about the responsibility of non-state actors in implementing human rights” (p.36).  Rights activists are reluctant to talk about responsibilities of non-state actors out of concern that such talk might “take the pressure off the state, risk blaming the victim, underplay the structural causes of injustice, or crowd out other more collective forms of political action” (p.5).  Human rights activists, Sikkink emphasizes, while avoiding recognizing responsibility explicitly, have nonetheless implicitly “assumed responsibility and worked in networks with other agents of justice to bring about change” (p.127).  In this sense, responsibilities are the “hidden face of rights, present in the practices of human rights actors, but something that activists don’t talk about” (p.5).

In the first third of the book, Sikkink establishes the theoretical framework for a forward-looking conception of human rights implementation. In the last two-thirds, she applies her forward-looking model to five issues that are close to her heart and home: voting, climate change, sexual assault, digital privacy, and free speech on campus.  Her discussion of these issues is decidedly US-centric, based mostly on how they arise at Harvard and, to a lesser extent, on other American university campuses, with only minimal reference to what a forward-looking approach to implementation of the same rights might entail in other countries.  Among the five issues, voting receives the most extensive treatment, about one-third of the book, as much as the other four topics combined.  Several factors prompted me to question whether voting is the best example of forward-looking responsibility in operation.

* * *

In the voting context, forward-looking responsibility means above all the acceptance of a norm that considers voting a non-negotiable responsibility of citizenship, much like serving jury duty and paying taxes. But we also have a “networked responsibility” to convince others to accept the voting norm and to assist them in exercising that right.  Sikkink’s discussion zeroes in on how to increase voter turnout among Harvard students and, through focus-group sessions with such students, examines the challenges of persuading them to accept the voting norm.

Sikkink recognizes that Harvard students are far from representative of American university students, let alone of Americans generally.  Although at the pinnacle of privilege in American society, Harvard students, like their peers at other universities, nonetheless under-participate in local and national elections. The difficulties they encounter in registering to vote and casting their ballots are a telling indication that the electoral system is complex for far wider swaths of the American public.  But focusing on them leaves out the consideration of how to reach and persuade less privileged groups.  A few of Stacey Abrams’s insights would have been useful.

Sikkink’s book, moreover, went to press prior to the November 2020 presidential election, an election in which approximately 159 million Americans voted — a record turnout, constituting about two-thirds of the eligible electorate and a whopping seven percentage points higher than the 2016 turnout. Yet, the election and its aftermath have given rise to unprecedented turmoil, including unsupportable claims of a “stolen” election and an uprising at the U.S. Capitol in January, fundamentally altering the national conversation over voting in the United States from what it was a year ago. Sikkink’s concerns about voter apathy no longer seem quite so central to that conversation.

Rather, more than six months after the election, a substantial minority of the American electorate still adheres to the notion of a “stolen” election, despite overwhelming evidence that the official election results were fully accurate within any reasonable margin of error.  In the aftermath of the election, furthermore, state legislatures in several states have adopted or have under consideration measures that seem designed specifically to discourage some of America’s most vulnerable groups from voting, under the guise of preventing voter fraud — even though evidence of actual fraud in the 2020 election was scant to non-existent.  Sikkink foresees this issue when she notes that state officials in some parts of the United States “do not want to expand voter turnout and even actively suppress it” (p.111).   In such situations, she writes, “networked responsibility of non-state actors to change voting norms and practices is all the more important” (p.111).  If Sikkink were writing today, it seems safe to say that she would elaborate upon this point at greater length.

Unlike some of the rights Sikkink discusses, however, voting to select a country’s leaders is firmly established in written law.  But the responsibility side of this unquestioned right must compete with a plausible claim that in a democratic society based on freedom of choice, a right not to vote should be recognized as a legitimate exercise of that freedom — a way, for instance, of expressing one’s disenchantment with the electoral and political system or, more parochially, dissatisfaction with the candidates offered on the ballot.  Many of the students in the Harvard focus group expressed the view that voting should be “situational and optional” (p.92).  Sikkink emphatically rejects this argument, suggesting at one point that casting a blank ballot is the only responsible way to express such views: “if one is going to refuse to vote in protest, it must be just as hard as voting” (p.121), she writes.

By coincidence, as I was wrestling with Sikkink’s arguments against recognizing a right not to vote in June of this year — and finding myself less than fully convinced — I was following the presidential election in Iran, which saw the country’s lowest voter turnout in four decades: slightly less than 50%, with another 14% casting blank ballots.  Dissidents in Iran organized a campaign this year that urged abstention as the most principled way to express opposition to what the campaign leaders maintained was an intractably tyrannical regime.

The abstention campaign argued that the voting process for the election had been structured to eliminate any serious reform candidates; that the Iranian government since 1979 had compiled an extensive track record of voter intimidation and manipulation of vote counting; and that the Iranian government uses its usually high turnout rates (85% for the 2009 presidential election; over 70% in 2013 and 2017) to affirm its own legitimacy. In short, there seemed to be little reason for Iranians to anticipate that the election would be “free and fair,” which may be the necessary predicate to Sikkink’s rejection of a right not to vote, a point she may wish to elaborate upon subsequently.  (Were she writing today, Sikkink might also address the “freedom” not to wear a mask or be vaccinated during a pandemic; I also wondered how she would react to the regional French elections held immediately after the Iranian election, in which an astounding two-thirds of the electorate abstained.)

If Sikkink’s application of forward-looking responsibility to voting contains rough edges, her application to climate change makes for a near-perfect fit. While it is obviously of utmost importance to know the underlying causes of climate change and to understand how we reached the current crisis, backward-looking responsibility — seeking to hold responsible those who contributed to the crisis — has only limited utility.  Without letting big fossil fuel polluters off the hook for their disproportionate contribution to the current state of affairs, backward-looking responsibility “must be combined with forward-looking responsibilities,” Sikkink argues, “including the responsibilities of actors who are not directly to blame” (p.54).  When it comes to climate change, we are all “agents of justice” if we want to preserve a livable planet.

The backward-looking liability model remains critical when applied to the right to be free from sexual assault, a large umbrella category that includes all non-consensual sexual activity or contact, including but not limited to rape.  Any effort to limit sexual assault must “first hold perpetrators responsible—and, where appropriate, criminally accountable” (p.139), Sikkink writes. But we also need to “think about the forward-looking responsibility of multiple agents of justice, especially how potential victims, as capable agents, can take measures to prevent future violence” (p.138).

Digital privacy, Sikkink explains, transcends the interest of individuals in limiting the dissemination of their own personal information.  She describes how we can inadvertently expose others to online privacy invasions.  In protecting privacy online, we need to become proficient in what she terms “digital civics,” another term for the forward-looking responsibility of Internet users to help ensure both their own privacy rights and those of other users.

A separate but related aspect of digital civics is learning how to recognize and not spread disinformation, or “fake news,” a skill that raises questions about the bounds of the right to free speech online.  We all have an ethical and political responsibility, if not quite a legal one, to evaluate sources and to refrain from sharing (or “liking”) information that does not appear to have sound factual grounding, Sikkink argues. The bounds of free speech also arise on campus, where a balance must be found between the right to speak itself and the right to protest speech that one finds offensive.

On university campuses today, many students feel they have an obligation to defend fellow students, and oppressed people generally, against hurtful and degrading speech. Sikkink notes that over half the students responding to one survey thought it was acceptable to shout at speakers making what they perceived to be offensive statements, while 19% said it was acceptable to use violence to prevent what they perceived to be abusive speech. These are not responsible exercises of one’s right to protest offensive speech, Sikkink responds.  Violence and drowning out the speech of others are more than just “problematic from the point of view of the ethic of responsibility” (p.136).  As a pragmatic matter, these forms of protest rarely generate support for the ideas espoused by those using such tactics.

* * *

Pragmatism thoroughly infuses Sikkink’s notion of forward-looking responsibility, as applied not only to campus speech and the other rights discussed here but, presumptively, to the full range of recognized human rights.   Her pragmatism animates the question she closes the book with, literally her bottom line: in addition to — or even instead of — asking who is to blame, we should ask: “What together we can do?” (p.148).  As her fellow academic theorists evaluate the fresh perspective that Sikkink brings to international human rights in this compact but thought-provoking volume, they will want to weigh in on the pertinence of this question to our understanding of those rights.


Thomas H. Peebles

Caen, France

August 21, 2021

[NOTE: A nearly identical version of this review has also been posted to the Tocqueville 21 blog, maintained in connection with the American University of Paris’ Tocqueville Review and its Center for Critical Democracy Studies]



Filed under Political Theory, Rule of Law

Deciphering a Confounding Thinker


Robert Zaretsky, The Subversive Simone Weil:

A Life in Five Ideas (University of Chicago Press)


Simone Weil is considered today among the foremost twentieth-century French intellectuals, on par with her luminous contemporaries Simone de Beauvoir, Jean-Paul Sartre, and Albert Camus. And yet she was not widely known when she died at age 34 in 1943. Although she wrote profusely, only small portions of her writings were published during her lifetime. Much of her written work was left in private notebooks and published posthumously. It was only after the Second World War, as Weil’s writings increasingly came to light, that a comprehensive picture of her thinking emerged: comprehensive without necessarily being coherent. In The Subversive Simone Weil: A Life in Five Ideas, Robert Zaretsky attempts to provide this coherence.

Indeed, Weil was a confounding thinker whose body of thought and the life she lived seem awash in contradictions. As Zaretsky notes at the outset, Weil was:

an anarchist who espoused conservative ideals, a pacifist who fought in the Spanish Civil War, a saint who refused baptism, a mystic who was a labor militant, a French Jew who was buried in the Catholic section of an English cemetery, a teacher who dismissed the importance of solving a problem, [and] the most willful of individuals who advocated the extinction of the self (p.2).

Zaretsky, a professor at the University of Houston and one of the Anglophone world’s most fluent writers on French intellectual and cultural history, aims not so much to dispel these contradictions as to distill Weil’s intellectual legacy, contradictions and all, into five core ideas encapsulating the body of political, social, and theological thought she left behind. These five ideas are: affliction, attention, resistance, rootedness, and goodness, each the object of a separate chapter.

Unsurprisingly, these five Weilian ideas are far more intricate and multi-faceted than the single words suggest, and they are interrelated, with what Zaretsky terms “blurred borders” (p.14).  Moreover, the five ideas are presented in approximate chronological order: the first three chapters, on affliction, attention, and resistance, concern mostly Weil in the 1930s, while the last two, on rootedness and goodness, primarily cover her wartime years from 1940 to 1943, her most productive literary period.

Each chapter can be read as a standalone essay, and Zaretsky would likely discourage us from searching too eagerly for threads that unite the five into an overarching narrative. But there is one connecting thread which provides context for the apparent contradictions in Weil’s life and thought: collectively, the five ideas tell the story of Weil’s transformation from an exceptionally empathetic yet otherwise conventional 1930s non-communist, left-wing intellectual—Jewish and secular—to someone who in her final years found commonality with conservative political and social thought, embraced Catholicism and Christianity, and was profoundly influenced by religious mysticism. Although not intended as a biography in the conventional sense, The Subversive Simone Weil begins with a short but helpful overview of Weil’s abbreviated life before plunging into her five ideas.

* * *

Weil was born in 1909 and brought up in a progressive, militantly secular bourgeois Jewish family in Paris. Her older brother André became one of the twentieth century’s most accomplished mathematicians. She graduated in 1931 from France’s renowned École Normale Supérieure, the same school that had awarded diplomas to Jean-Paul Sartre and Raymond Aron a few years earlier.  After ENS, she took three secondary teaching positions in provincial France, and also managed to find her way to local factories, where she taught workers in evening classes and, with limited success, did some of the hard factory work herself.

In 1936, Weil joined the Republican side in the Spanish Civil War, and was briefly involved in combat operations before she inadvertently stepped into a vat of boiling cooking oil, severely injuring her foot. After she returned to France to allow her injury to heal, she had three seemingly genuine mystical religious experiences that set in motion what Zaretsky characterizes as rehearsals for her “slow and never quite completed embrace of Roman Catholicism” (p.134).  When Nazi Germany invaded France in 1940, Weil and her parents caught the last train out of Paris for Marseille, where they stayed for almost two years before leaving for New York. While in Marseille, Weil was deeply influenced by Joseph-Marie Perrin, a nearly blind Dominican priest, and came close but stopped short of a formal conversion to Catholicism.

Weil left her parents in New York for London, where she joined Charles de Gaulle’s government-in-exile, with ambitions, never realized, of returning to France to battle the Nazis directly. While in London, her primary responsibility was to work on reports detailing a vision for a liberated and republican France. Physically frail most of her life, Weil suffered from migraines, and may have been on a hunger strike when she died of complications from tuberculosis in 1943, in a sanatorium south-east of London.

* * *

Malheur was Weil’s French term for “affliction.” This is the first of the five ideas that Zaretsky distills from Weil’s life and thought, in which we see Weil at her most political. Her idea of affliction appears to have arisen principally from her experiences working in factories early in her professional career.  Yet, affliction for Weil was the condition not just of factory workers, but of nearly all human beings in modern, industrial society—the “unavoidable consequence of a world governed by forces largely beyond our comprehension, not to mention our control” (p.36).  Affliction was “ground zero of human misery” (p.36), entailing psychological degradation as much as physical suffering.

The early Weil was attracted politically to anarcho-syndicalism, a movement that urged direct action by workers as the means to achieve power in depression-riddled 1930s France, with direct democracy of worker co-operatives as its end. In these years, Weil was an “isolated voice on the left who denounced communism with the same vehemence as she did fascism” (p.32), Zaretsky writes, comparing her to George Orwell and Albert Camus. With what Zaretsky describes as “stunning prescience” (p.32), she foresaw the foreboding consequences of totalitarianism emerging both in Stalin’s Russia and Hitler’s Germany.

Attention, sometimes considered Weil’s central ethical concept, involves how we see the world and others in it. But it is an elusive concept, “supremely difficult to grasp” (p.46).  Attention for Weil was bound up with the French attente: waiting, which requires the canceling of our desires.  Attention takes place in what Zaretsky terms the world’s salle d’attente, its waiting room, where we “forget our own itinerary and open ourselves to the itineraries of others” (p.54).  Zaretsky sees the idea of attention at work in Weil’s approach to teaching secondary school students, where her emphasis was on identifying problems rather than finding solutions. She seemed to be telling her students that it is the going there, not the getting there, that counts. Although not discussed by Zaretsky, there are echoes of Martin Buber’s “I-Thou” relationship in Weil’s notion of attention.

Zaretsky refrains from terming the Spanish Civil War a turning point for Weil, but it seems to have been just that.  Her brief experience in the war, combined with a growing realization of the existential threat which the Nazis and their fascist allies posed to European civilization, prompted her to revise her earlier commitment to pacifism. This is one consequence of resistance, Zaretsky’s third idea, which aligned Weil with the ancient Stoics and Epicureans, who taught their followers to resist recklessness, panic and passion. For Weil, resistance was an affirmation that the “truly free individual is one who takes the world as it is and aligns with it as best they can” (p.64), as Zaretsky puts it. Weil’s Spanish Civil War experience also gave rise to a growing conviction that “politics alone could not fully grasp the human condition” (p.133).

Rootedness, the fourth idea, arises out of Weil’s visceral sense of having been torn from her native France.  Déracinement, uprooting, was the founding sentiment for The Need for Roots, her final work, in which she emphasized how the persistence of a people is tied to the persistence of its culture, a community’s “deeply engrained way of life, which bends but is not broken as it carries across generations” (p.99).  Rootedness takes place in a “finite and flawed community” and became for Weil the “basis for a moral and intellectual life.” A community’s ties to the past “must be protected for the very same reason that a tree’s roots in the earth must be protected: once those roots are torn up, death follows” (p.126).

There is no evidence that Weil read either the Irish Whig Edmund Burke or the German Romantic Johann Herder, leading conservatives of the late eighteenth and early nineteenth centuries.  Nonetheless, Zaretsky finds considerable resonance between Weil’s sense of rootedness and Burke’s searing critique of the French Revolution, as well as Herder’s rejection of the universalism of the Enlightenment in favor of preserving local and linguistic communities.  Closer to her own time, Weil’s views on community aligned surprisingly with those of Maurice Barrès and Charles Maurras, two leading early twentieth-century French conservatives whose works turned on the need for roots. Zaretsky also finds commonalities between Weil and today’s communitarians, who reject the individualism of John Rawls.

But Weil also applied her views on rootedness to French colonialism, putting her at odds with her wartime boss in London, Charles de Gaulle, who was intent upon preserving the French Empire.  She perceived no meaningful difference between what the Nazis had done to her country—invaded and conquered—and what the French were doing in their overseas colonies.  Weil was appalled by the notion of a mission civilisatrice, a civilizing mission underlying France’s exertion of power overseas. It was essential for Weil that the war against Germany “not obscure the brute fact of French colonization of other peoples” (p.111).  Although Weil developed her idea of rootedness in the context of forced deportations brought about by Nazi conquests, she recognized that rootlessness can occur without ever moving or being moved. Drawing upon her idea of affliction, Weil linked this form of uprooting to capitalism and what the nineteenth-century Scottish commentator Thomas Carlyle termed capitalism’s “cash nexus.”

Zaretsky’s final chapter on Goodness addresses what he terms Weil’s “brilliant and often bruising dialogue with Christianity” (p.134), the extension of her three mystical experiences in the late 1930s.  The dialogue was bruising, Zaretsky indicates, because Weil, a one-time secular Jew, found her desire to surrender wholly to the Church’s faith running up against her indignation at much of the Church’s history and dogma.  “Appalled by a religion with universal claims that does not allow for the salvation of all humankind,” Weil “refused to separate herself from the fate of unbelievers. Anathema sit, the Church’s sentence of banishment against heretics filled Weil with horror” (p.135).  Yet, in her final years, Catholicism became the “substance and scaffolding of her worldview” (p.34), Zaretsky writes.

But Zaretsky’s emphasis is less on Weil’s theological views than on how she found her intellectual bridge to Christianity through the ancient Greeks, especially the thought of Plato.  Ancient Greek poetry, art, philosophy and science all manifested the Greek search for divine perfection, or what Plato termed “the Good.”  For Weil, faith appears to have been the pursuit of Plato’s Good by other means. The Irish philosopher and novelist Iris Murdoch, who helped introduce Weil to a generation of British readers in the 1950s and 1960s, explained that Weil’s tilt toward Christianity amounted to dropping one “o” from the Good.

* * *

Simone Weil was a daunting figure, intimidating perhaps even to Zaretsky, who avers that her ability to plumb the human condition “runs so deep that it risks losing those of us who remain near the surface of things” (p.38).  Zaretsky, however, takes his readers well below the surface of her body of thought in this eloquent work, producing a comprehensible structure for understanding an enigmatic thinker. His work should hold the interest of readers already familiar with Weil and those encountering her for the first time.

Thomas H. Peebles

La Châtaigneraie, France

July 31, 2021

[NOTE: A nearly identical version of this review has also been posted to the Tocqueville 21 blog, maintained in connection with the American University of Paris’ Tocqueville Review and its Center for Critical Democracy Studies]



Filed under French History, Intellectual History, Political Theory, Religion

Converging Visions of Equality


Peniel E. Joseph, The Sword and the Shield:

The Revolutionary Lives of Malcolm X and Martin Luther King, Jr. (Basic Books)

[NOTE: A version of this review has been posted to the Tocqueville 21 blog: https://tocqueville21.com/books/king-malcolm-x-civil-rights/.  Tocqueville 21 takes its name from the 19th century French aristocrat who gave Americans much insight into their democracy.  It seeks to encourage in-depth thinking about democratic theory and practice, with particular but by no means exclusive emphasis on the United States and France.  The site is maintained in connection with the American University of Paris’ Tocqueville Review and its Center for Critical Democracy Studies].

Martin Luther King, Jr., and Malcolm X met only once, a chance encounter at the US Capitol on March 26, 1964.  The two men were at the Capitol to listen to a debate over what would become the Civil Rights Act of 1964, a measure that banned discrimination in employment, mandated equal access to most public facilities, and had the potential to be the most consequential piece of federal legislation on behalf of equality for African-Americans since the Reconstruction era nearly a century earlier.  There wasn’t much substance to the encounter. “Well, Malcolm, good to see you,” King said.  “Good to see you,” Malcolm responded. There may have been some additional light chitchat, but not much more.  Fortunately, photographers were present, and we are the beneficiaries of several iconic photos of the encounter.

That encounter at the Capitol constitutes the starting point for Peniel Joseph’s enthralling The Sword and the Shield: The Revolutionary Lives of Malcolm X and Martin Luther King, a work that has some of the indicia of a dual biography, albeit highly condensed.  But Joseph, a professor at the University of Texas at Austin who has written prolifically on modern African American history, places his emphasis on the two men’s intellectual journeys.  Drawing heavily from their speeches, writings and public debates, Joseph challenges the conventional view of the two men as polar opposites who represented competing visions of full equality for African Americans.  The conventional view misses the nuances and evolution of both men’s thinking, Joseph argues, obscuring the ways their politics and activism came to overlap.  Each plainly influenced the other.  “Over time, each persuaded the other to become more like himself” (p.13).

The final stages of my work on this review of the convergence of the two men’s thinking coincided with the trial of Derek Chauvin for the killing of George Floyd last May, along with the recent killing of still another black man, Daunte Wright, in the same Minneapolis metropolitan area.  Watching and reading about events in Minneapolis, I couldn’t help concluding that the three familiar words “Black Lives Matter,” the movement that led demonstrations across the country and the world last year to protest the Floyd killing, also neatly encapsulate the commonalities that Joseph identifies in The Sword and the Shield.

* * *

In March 1964, King was considered the “single most influential civil rights leader in the nation” (p.2), Joseph writes, whereas Malcolm, an outlier in the mainstream civil rights movement, was “perhaps the most vocal critic of white supremacy ever produced by black America” (p.4).    The two men shared extraordinary rhetorical and organizational skills.  Each was a charismatic leader and deep thinker who articulated in galvanizing terms his vision of full equality for African Americans.  But these visions sometimes appeared to be not just polar opposites but mutually exclusive.

In the conventional view of the time, King, the Southern Baptist preacher with a Ph.D. in theology, deserved mainstream America’s support as the civil rights leader who sought integration of African Americans into the larger white society, and unfailingly advocated non-violence as the most effective means to that end.  White liberals held King in high esteem for his almost religious belief in the potential of the American political system to close the gap between its lofty democratic rhetoric and the reality of pervasive racial segregation, discrimination and second-class citizenship, a belief Malcolm considered naïve.

A high school dropout who had served time in jail, Malcolm became the most visible spokesman for the Nation of Islam (NOI), an idiosyncratic American religious organization that preached black empowerment and racial segregation.  Often termed a “black nationalist,” Malcolm found the key to full equality in political and economic empowerment of African American communities.  He considered racial integration a fool’s errand and left open the possibility of violence as a means of defending against white inflicted violence.  He seemed to embrace some form of racial separation as the most effective means to achieve full equality and improve the lives of black Americans – a position that the media found to be ironically similar to that of the hard-core racial segregationists with whom both he and King were battling.

But Joseph demonstrates that Malcolm was moving in King’s direction at the time of their March 1964 encounter.  Coming off a bitter fallout with the NOI and its leader, Elijah Muhammad, he had cut his ties with the organization just months before the encounter.  He had traveled to Washington to demonstrate his support for the civil rights legislation under consideration.  Thinking he could make a contribution to the mainstream civil rights movement, Malcolm sought an alliance with King and his allies.  Although that alliance never materialized, King began to embrace positions identified with Malcolm after the latter’s assassination less than 11 months later, stressing in particular that economic justice needed to be a component of full equality for African Americans.  King also became an outspoken opponent of American involvement in the war in Vietnam, of which Malcolm had long been critical.

Singular events had thrust both men onto the national stage.  King rose to prominence as a newly-ordained minister who at age 26 became the most audible voice of the 1955-56 Montgomery, Alabama, bus boycott, after Rosa Parks famously refused to give up her seat on a public bus to a white person.  Malcolm’s rise to fame came in 1959 through a nationally televised five-part CBS documentary on the NOI, The Hate that Hate Produced, hosted by the then little-known Mike Wallace.  The documentary was an immediate sensation.  It was a one-sided indictment of the NOI, Joseph indicates, intended to scare and outrage whites.  But it made Malcolm and his NOI boss Elijah Muhammad heroes within black communities across the country.  King seemed to buy into the documentary’s theme, describing the NOI as an organization dedicated to “black supremacy,” which he considered “as bad as white supremacy” (p.85).

But even at this time, each man had connected his US-based activism to anti-colonial movements that were altering the face of Africa and Asia.  Both recognized that the systemic nature of racial oppression “transcended boundaries of nation-states” (p.73).  Malcolm made his first trip abroad in 1959, to Egypt and Nigeria.  The trip helped him “internationalize black political radicalism,” by linking domestic black politics to the “larger world of anti-colonial and Third World liberation movements” (p.18-19), as Joseph puts it.  King, whose philosophy of non-violence owed much to Mahatma Gandhi, visited India in 1959, characterizing himself as a “‘pilgrim’ coming to pay homage to a nation liberated from colonial oppression against seemingly insurmountable odds” (p.80).  After the visit, he “proudly claimed the Third World as an integral part of a worldwide social justice movement” (p.80).

After his break with the NOI and just after his chance encounter with King at the US Capitol, Malcolm took a transformative five-week tour of Africa and the Middle East in the spring of 1964.  The tour put him on the path to becoming a conventional Muslim and prompted him to back away from anti-white views he had expressed while with the NOI.  In Mecca, Saudi Arabia, he professed to see “sincere and true brotherhood practiced by all colors together, irrespective of their color” (p.188).  He went on to Nigeria and “dreamed of becoming the leader of a political revolution steeped in the anti-colonial fervor sweeping Africa” (p.191).  Malcolm’s time in Africa, Joseph concludes, “changed his mind, body, and soul . . . The African continent intoxicated Malcolm X and informed his political dreams” (p.192-93).

By the time of their March 1964 meeting, moreover, the two men had begun to recognize each other’s potential.  After over a decade of forcefully criticizing the mainstream civil rights movement, Malcolm now recognized King’s goals as his own but chose different methods to get there.  Malcolm also had a subtle effect on King.  The “more he ridiculed and challenged King publicly,” Joseph writes, the more King “reaffirmed the strength of non-violence as a weapon of peace capable of transforming American democracy” (p.155).  King for his part had begun to look outside the rigidly segregated South and toward major urban centers in the North, Malcolm’s bailiwick, as possible sites of protest that would expand the freedom struggle beyond its southern roots.

Joseph cites three instances in which Malcolm extended written invitations to King, all of which went unanswered. But in early February 1965, after Malcolm had participated in a panel discussion with King’s wife, King concluded that the time had come to meet with his formidable peer.  Later that month, alas, Malcolm was gunned down in New York, almost certainly the work of the NOI, although details of the assassination remain murky to this day.

In the three years remaining to him after Malcolm’s assassination, King borrowed liberally from the black nationalist’s playbook, embracing in particular the notion of economic justice as a necessary component of full equality for African Americans.  Although he never wavered in his commitment to non-violence, King saw his cause differently after the uprising in the Watts section of Los Angeles in the summer of 1965.  Watts “transformed King,” Joseph writes, making clear that civil unrest in Northern cities was a “product of institutional racism and poverty that required far more material and political resources than ever imagined by the architects of the Great Society” (p.235).  King also began to speak out publicly in 1965 against the escalation of America’s military commitment in Vietnam, marking the beginning of the end of his close relationship with President Johnson.

King delivered his most pointed criticism of the war on April 4, 1967, precisely one year prior to his assassination, at the Riverside Church in New York City, abutting Harlem, Malcolm’s home base.  Linking the war to the prevalence of racism and poverty in the United States, King lamented the “cruel irony of watching Negro and white boys on TV screens as they kill and die together for a nation that has been unable to seat them together in the same schools” (p.267).  Joseph terms King’s Riverside Church address the “boldest political decision of his career” (p.268).  It was the final turning point for King, marking his formal break with mainstream politics and his “full transition” from a civil rights leader to a “political revolutionary” who “refused to remain quiet in the face of domestic and international crises” (p.268).

After Riverside, in his last year, King became what Joseph describes as America’s “most well-known anti-war activist” (p.271).  King lent a Nobel Prize-winner’s prestige to a peace movement struggling to find its voice at a time when most Americans still supported the war.  Simultaneously, he pushed for federally guaranteed income, decent and racially integrated housing and public schools — what he termed a “revolution of values” (p.287).  During this period, Stokely Carmichael, who once worked with King in Mississippi (and is the subject of a Joseph biography), coined the term “Black Power.”  In Joseph’s view, the Black Power movement represented the natural extension of Malcolm’s political philosophy after his death. Although King frequently criticized the movement in his final years, he nonetheless found himself in agreement with much of its agenda.

In his final months, King supported a Poor People’s march on Washington, D.C.  He was in Memphis, Tennessee in April 1968 on behalf of striking sanitation workers, overwhelmingly African-American, who, though employed, were seeking better salaries and more humane working conditions, when he too was felled by an assassin’s bullet.

* * *

After reading Joseph’s masterful synthesis, it is easy to imagine Malcolm supporting King’s efforts in Memphis that April.  And if the two men were still with us today, it is equally easy to imagine both warmly embracing the “Black Lives Matter” movement.


Thomas H. Peebles

La Châtaigneraie, France

April 20, 2021



Filed under American Politics, American Society, Political Theory, United States History

Digging Deeply Into The Idea of Democracy


James Miller, Can Democracy Work:

A Short History of a Radical Idea, From Ancient Athens to Our World

(Farrar, Straus and Giroux)

and

William Davies, Nervous States:

Democracy and the Decline of Reason

(W.W. Norton & Co.)

[NOTE: A condensed version of this review has also been posted to a blog known as Tocqueville 21: https://tocqueville21.com/books/can-democracy-work.  Taking its name from the 19th century French aristocrat who gave Americans much insight into their democracy, Tocqueville 21 seeks to encourage in-depth thinking about democratic theory and practice, with particular but by no means exclusive emphasis on the United States and France.  The site is maintained in connection with the American University of Paris’ Tocqueville Review and its Center for Critical Democracy Studies.  I anticipate regular postings on Tocqueville 21 going forward.]

Did American democracy survive the presidency of Donald Trump?  Variants on this question, never far from the surface during that four-year presidency, took on terrifying immediacy in the wake of the assault on the US Capitol this past January. The question seems sure to occupy historians, commentators and the public during the administration of Joe Biden and beyond.  If nothing else, the Trump presidency and now its aftermath bring home the need to dig deeply into the very idea of democracy, looking more closely at its history, theory, practice, and limitations, and asking what its core principles are and what it takes to sustain them.  But we might shorten the inquiry to a single, pragmatic question: can democracy work?

This happens to be the title of James Miller’s Can Democracy Work: A Short History of a Radical Idea, From Ancient Athens to Our World.  But it could also be the title of William Davies’ Nervous States: Democracy and the Decline of Reason. The two works, both written during the Trump presidency, fall short of providing definitive or even reassuring answers to the question that Miller, professor of politics and liberal studies at New York’s New School for Social Research, has taken for his title.  But each casts enriching yet altogether different light on democratic theory and practice.

Miller’s approach is for the most part historical. Through a series of selected — and, by his own admission, “Eurocentric” (M.12) — case studies, he explores how the term “democracy” has evolved over the centuries, beginning with ancient Athens.  The approach of Davies, a political economist at Goldsmiths, University of London, is more difficult to categorize, but might be described as philosophical.  It is grounded in the legacy of 17th century philosophers René Descartes and Thomas Hobbes, his departure point for a complex and not always easy-to-follow explanation of the roots of modern populism, that combustible mixture of nostalgia, resentment, anger and fear that seemed to have triumphed at the time of the 2016 Brexit vote in Great Britain and the election of Donald Trump in the United States later that year.  Davies is most concerned about two manifestations of the “decline of reason” of his subtitle: the present-day lack of confidence and trust in experts and democratically elected representatives; and the role of emotion and fear in contemporary politics.

Miller frames his historical overview with a paradox: despite blatant anti-democratic tendencies across the globe, a generalized notion of democracy as the most desirable form of government retains a strong hold on much, maybe most, of the world’s population.  From Myanmar and Hong Kong to the throng that invaded the US Capitol in January, nearly every public demonstration against the status quo utilizes the language of democracy.  Almost all the world’s political regimes, from the United States to North Korea, claim to embody some form of democracy.  “As imperfect as all the world’s systems are that claim to be democratic,” Miller writes, in today’s world the ideal of democracy is “more universally honored than ever before in human history” (M.211).

But the near-universal adhesion to this ideal is relatively recent, dating largely from the period since World War II, when the concept of democracy came to embrace self-determination of populations that previously had lived under foreign domination.  Throughout most of history, democracy was associated with the danger of mob rule, often seen as a “virtual synonym for violent anarchy” (M.59).  Modern democracy in Miller’s interpretation begins with the 18th century French and American Revolutions.  Revolts against the status quo are the heart of modern democracy, he contends.  They are not simply blemishes on the “peaceful forward march toward a more just society” (M.10).  Since the early 19th century, representative government, where voters elect their leaders (“indirect democracy”), has come to be considered the only practical form of democratic governance for populous nation-states.

* * *

But in 5th and 4th century BCE Athens, where Miller’s case studies begin, what we now term direct democracy prevailed.  More than any modern democracy, a community of near absolute equality existed among Athenian citizens, even though citizenship was tightly restricted, open only to a fraction of the adult male population.  Many of Athens’ rivals, governed by oligarchs and aristocrats, considered the direct democracy practiced in Athens a formula for mob rule, a view that persisted throughout the intervening centuries.  By the late 18th century, however, a competing view had emerged in France that some sort of democratic rule could serve as a check on monarchy and aristocracy.

In revolutionary Paris in early 1793, in the midst of the bloodiest phase of the French Revolution, the Marquis de Condorcet led the drafting of a proposed constitution that Miller considers the most purely democratic instrument of the 18th century and maybe of the two centuries since.  Condorcet’s draft constitution envisioned a wide network of local assemblies in which any citizen could propose legislation.  Although not implemented, the thinking behind Condorcet’s draft gave impetus to the notion of representative government as a system “preferable to, and a necessary check on, the unruly excesses of a purely direct democracy” (M.86).

The debate in the early 19th century centered on suffrage, the question of who gets to vote, with democracy proponents pushing to remove or lessen property requirements and extend the franchise to ever-wider segments of the (male) adult population.  A cluster of additional institutions and practices came to be considered essential to buttress an extended franchise, among them free and fair elections, protection of the human rights of all citizens, and adherence to the rule of law.  But Miller’s 19th century case studies are instances of short-term setbacks for the democratic cause: the failure of the massive popular movement known as Chartism to extend the franchise significantly in Britain in the 1840s; and the 1848 uprisings across the European continent, at once nationalist and democratic, which sought representative political institutions and something akin to universal male suffrage, but failed everywhere except France to extend the franchise.

In the second half of the 19th century, moreover, proponents of democracy found themselves confronting issues of economic freedom and social justice in a rapidly industrializing Europe.  Karl Marx, for one, whose Communist Manifesto was published in 1848, doubted whether democracy – “bourgeois democracy,” he termed it – could alleviate widespread urban poverty and the exploitation of workers.  But the most spectacular failure among Miller’s case studies was the Paris Commune of 1871, which collapsed into disastrous violence amidst tensions between economic and political freedom.  Ironically, the fear of violence that the Commune unleashed led to a series of democratizing political reforms throughout Europe, with the right to vote extended to more male citizens.  The organization of workers into unions and the rise of political parties complemented extension of the franchise and contributed to the process of democratization in late 19th and early 20th century Europe.

In the United States, a case apart in Miller’s account, a genuinely democratic culture had taken hold by the 1830s, as the young French aristocrat Alexis de Tocqueville recognized during his famous 1831-32 tour, ostensibly to study prison conditions.  As early as the 1790s, there was a tendency to use the terms “republic” and “democracy” as synonyms for the American constitutional system, even though none of the drafters of the 1787 Constitution thought of himself as a democrat.  James Madison derided what he termed pure democracies, “which have ever been spectacles of turbulence and contention” (M.99).  The constitution’s drafters envisioned a representative government in which voters would select a “natural aristocracy,” as John Adams put it, comprising “men of virtue and talent, who would govern on behalf of all, with a dispassionate regard for the common good” (M.92).

The notion of a natural aristocracy all but disappeared when Andrew Jackson split Thomas Jefferson’s Democratic-Republican Party in two in his successful run for the presidency in 1828.  Running as a “Democrat,” Jackson confirmed that “democracy” from that point forward would be an “unambiguously honorific term in the American political lexicon” (M.110), Miller writes.  It was during Jackson’s presidency that Tocqueville arrived in the United States.

Aware of how the institution of slavery undermined America’s democratic pretensions, Tocqueville nonetheless saw in the restlessness of Jacksonian America what Miller describes as a “new kind of society, in which the principle of equality was pushed to its limits” (M.115).  As practiced in America, democracy was a “way of life, and a shared faith, instantiated in other forms of association, in modes of thought and belief, in the attitudes and inclinations of individuals who have absorbed a kind of democratic temperament” (M.7).  Yet Tocqueville seemed to have had the Jacksonian style of democracy in mind when he warned against what he called “democratic despotism,” where a majority could override the rights and liberties of minorities.

Woodrow Wilson’s plea in 1917 to the US Congress that the United States enter World War I to “make the world safe for democracy” constitutes the beginning of the 20th century idea of democracy as a universal value, Miller argues.  But Wilson’s soaring faith in democracy turned out to be “astonishingly parochial” (M.176).  The post-World War I peace conferences in 1919 left intact the colonies of Britain and France, “under the pretext that the nonwhite races needed more time to become fully mature peoples, fit for democratic institutions” (M.190-91).

The Covenant of the League of Nations, the organization that Wilson hoped would be instrumental in preventing future conflict, “encouraged an expectation of self-determination as a new and universal political right” (M.191), even as the isolationist Congress thwarted Wilson’s plan for United States membership in the League.  For countries living under colonial domination, the expectation of self-determination was heightened after the more murderous World War II, particularly through the 1948 United Nations’ Universal Declaration of Human Rights.  Although a text without enforcement mechanisms, the declaration helped inspire human rights and independence movements across the globe.

Miller finishes by explaining why he remains attracted to modern attempts at direct democracy, resembling in some senses those of ancient Athens, particularly the notion of “participatory democracy” which influenced him as a young 1960s radical and which he saw replicated in the Occupy Wall Street Movement of ten years ago.  But direct democracy, he winds up concluding, is no more viable today than it was at the time of the French Revolution. It is not possible to create a workable participatory democracy model in a large, complex society.  Any “serious effort to implement such a structure will require a delegation of authority and the selection of representatives – in short the creation of an indirect democracy, and at some distance from most participants”  (M.232-33).

The Trump presidency, Miller argues, is best considered “not as a protest against modern democracy per se, but against the limits of modern democracy” (M.239).  Like Brexit, it expressed, in an “inchoate and potentially self-defeating” manner, a desire for “more democracy, for a larger voice for ordinary people” (M.240), not unlike the participatory democracy campaigns of the 1960s.  At the time of Trump’s January 2017 inauguration, Miller appreciated that he remained free to “protest a political leader whose character and public policies I found repugnant.”  But he realized that he was “also expected to acknowledge, and peacefully coexist with, compatriots who preferred Trump’s policies and personal style.  This is a part of what it means to be a citizen in a liberal democracy” (M.240), a portentous observation in light of the January 2021 assault on the US Capitol.

Democracies, Miller concludes, need to “explore new ways to foster a tolerant ethos that accepts, and can acknowledge, that there are many incompatible forms of life and forms of politics, not always directly democratic or participatory, in which humans can flourish” (M.234).  Although he doesn’t say so explicitly, this sounds much like an acknowledgement that present-day populism is here to stay.  By an altogether different route, Davies reaches roughly the same conclusion.

* * *

Davies is far from the first to highlight the challenges to democracy when voters appear to abandon reason for emotion; nor the first to try to explain why the claims of government experts and elected representatives are met with increased suspicion and diminished trust today.  But he may be the first to tie these manifestations of the “decline of reason” to the disintegration of binary philosophical distinctions that Descartes and Hobbes established in the 17th century: Descartes between mind and body, Hobbes between war and peace.

For Descartes, the mind existed independently of the body.  Descartes was obsessed by the question whether what we see, hear, or smell is actually real.  He “treated physical sensations with great suspicion, in contrast to the rational principles belonging to the mind” (D.xiii).  Descartes gave shape to the modern philosophical definition of a rational scientific mind, Davies argues, but to do so, he had to discount sensations and feelings.  Hobbes, exhausted by the protracted religious Thirty Years War on the European continent and civil wars in England, argued that the central purpose of the state was to “eradicate feelings of mutual fear that would otherwise trigger violence” (D.xiii).  If people don’t feel safe, Hobbes seemed to contend, it “doesn’t matter whether they are objectively safe or not; they will eventually start to take matters into their own hands” (D.xvi).

Davies shows how Descartes and Hobbes helped create the conceptual foundation for the modern administrative state, fashioned by merchants who introduced “strict new rules for how their impressions should be recorded and spoke of, to avoid exaggeration and distortion, using numbers and public record-keeping” (D.xiii), not least for more efficient tax collection.  Using numbers in this pragmatic way, these 17th century merchants were the forerunners of what we today call experts, especially in the disciplines of statistics and economics, with an ability to “keep personal feelings separate from their observations” (D.xiii).

The conclusions of such experts, denominated and accepted as “facts,” established the value of objectivity in public life, providing a basis for consensus among people who otherwise have little in common.  Facts provided by economists, statisticians, and scientists thus have what for Hobbes was a peace-building function; they are “akin to contracts, types of promises that experts make to each other and the public, that records are accurate and free from any personal bias or political agenda” (D.124), Davies explains.  But if democracy is to provide effective mechanisms for the resolution of disputes and disagreements, there must be “some commonly agreed starting point, that all are willing to recognize,” he warns. “Some things must be outside politics, if peaceful political disputes are to be possible” (D.62).

Davies makes the bold argument that the rise of emotion in contemporary politics and the inability of experts and facts to settle disputes today are the consequences of the breakdown of the binary distinctions of Descartes and Hobbes.  Through rapid advances in neuroscience, the brain, rather than Descartes’ concept of mind, has become the main way we have come to understand ourselves, demonstrating the “importance of emotion and physiology to all decision making” (D.xii).  The distinction between war and peace has also become less clear-cut since Hobbes’ time.

Davies is concerned particularly with how the type of knowledge used in warfare has been coopted for political purposes. Warfare knowledge doesn’t have the luxury of “slow, reasonable open public debate of the sort that scientific progress has been built upon.”  It is “shrouded in secrecy, accompanied by deliberate attempts to deceive the enemy. It has to be delivered at the right place and right time” (D.124), with emotions playing a crucial role.  Military knowledge is thus weaponized knowledge.  Political propaganda has all the indicia of military knowledge at work for political advantage.  But so does much of today’s digital communication.  Political argument conducted online “has come to feel more like conflict” (D.193), Davies observes, with conspiracy theories in particular given wide room to flourish.

The upshot is that democracies are being transformed today by the power of feeling and emotion, in “ways that cannot be ignored or reversed” (D.xvii-xviii).  Objective claims about the economy, society, the human body and nature “can no longer be successfully insulated from emotions” (D.xiv).  While we can lament the decline of modern reason, “as if emotions have overwhelmed the citadel of truth like barbarians” (D.xv), Davies suggests that we would do better to “value democracy’s capacity to give voice to fear, pain and anxiety that might otherwise be diverted in far more destructive directions” (D.xvii).

Yet Davies leaves unanswered the question whether there are limits on the forms of fear, pain and anxiety to which democracy should give voice.  He recognizes the potency of nationalism as a “way of understanding the life of society in mythical terms” (D.87).  But should democracy strive to give voice to nationalism’s most xenophobic and exclusionary forms?  Nowhere does he address racism, which, most social scientists now agree, was a stronger contributing factor to the 2016 election of Donald Trump than economic disparity, and it is difficult to articulate any rationale for giving racism a voice in a modern democracy.

In countering climate change skepticism, a primary example of popular mistrust of expert opinion and scientific consensus, Davies rejects renewed commitment to scientific expertise and rational argument (“bravado rationalism,” he calls it) as insufficient to overcome the “liars and manipulators” (D.108) who cast doubt on the reality of climate change.  But he doesn’t spell out what would be sufficient. The book went to press prior to the outbreak of the coronavirus pandemic.  Were Davies writing today, he likely would have addressed similar resistance to expert claims about fighting the pandemic, such as the efficacy of wearing masks.

Writing today, moreover, Davies might have used an expression other than “barbarians storming the citadel of truth,” a phrase that now brings to mind last January’s assault on the US Capitol.  While those who took part in the assault itself can be dealt with through the criminal justice process, with all the due process protections that a democracy affords accused lawbreakers, an astounding number of Americans who did not participate remain convinced that, despite overwhelming empirical evidence to the contrary, Joe Biden and the Democrats “stole” the 2020 presidential election from Donald Trump.

* * *

How can a democracy work when there is widespread disagreement with an incontrovertible fact, especially one that goes to democracy’s very heart, in this case the result of the vote and the peaceful transfer of power after an orderly election?  What if a massive number of citizens refuse to accept the obligation that Miller felt when his candidate lost in 2016, to acknowledge and peacefully coexist with the winning side?  Davies’ trenchant but quirky analysis provides no obvious solution to this quandary.  If we can find one, it will constitute an important step in answering the broader question of whether American democracy survived the Trump presidency.


Thomas H. Peebles

La Châtaigneraie, France

March 17, 2021



Filed under American Politics, History, Intellectual History, Political Theory, United States History

Liberals, Where Are They Coming From?


Helena Rosenblatt, The Lost History of Liberalism: From Ancient Rome

To the Twenty-First Century

(Princeton University Press) 

If you spent any time watching or listening to the political conventions of the two major American parties last month, you probably did not hear the word “liberal” much, if at all, during the Democratic National Convention.  But you may have heard the word frequently at the Republican National Convention, with liberalism perhaps described as something akin to a “disease or a poison,” or a danger to American “moral values.”  These, however, are not the words of Donald Trump Jr. or Rudy Giuliani, but rather of Helena Rosenblatt, a professor at the Graduate Center, City University of New York, in The Lost History of Liberalism: From Ancient Rome to the Twenty-First Century (p.265).  American Democrats, Rosenblatt further notes, avoid using the word “liberal” to describe themselves “for fear that it will render them unelectable” (p.265). What the heck is wrong with being a “liberal”? What is “liberalism” after all?

Rosenblatt argues that we are “muddled” about what we mean by “liberalism”:

People use the term in all sorts of different ways, often unwittingly, sometimes intentionally. They talk past each other, precluding any possibility of reasonable debate. It would be good to know what we are speaking about when we speak about liberalism (p.1).

Clarifying the meaning of the terms “liberal” and “liberalism” is the lofty goal Rosenblatt sets for herself in this ambitious work, a work that at its heart is an etymological study, a “word history of liberalism” (p.3), in which she explores how these two terms have evolved in political and social discourse over the centuries, from Roman times to the present.

The word “liberal,” Rosenblatt argues, took on an overtly political connotation only in the early 19th century, in the aftermath of the French Revolution. Up until that time, beginning with the Roman authors Cicero and Seneca, through the medieval and Renaissance periods in Europe, “liberal” was a word referring to one’s character.  Being “liberal” meant demonstrating the “virtues of a citizen, showing devotion to the common good, and respecting the importance of mutual connectedness” (p.8-9).  During the 18th century Enlightenment, the educated public began for the first time to speak not only of liberal individuals but also of liberal sentiments, ideas, ways of thinking, even constitutions.

Liberal political principles emerged as part of an effort to safeguard the achievements of the French Revolution and to protect them from the forces of extremism — from the revolution’s most radical proponents on one side to its most reactionary opponents on the other.  These principles included support for the broad ideals of the French Revolution, “liberté, égalité, fraternité;” opposition to absolute monarchy and aristocratic and ecclesiastical privilege; and such auxiliary concepts as popular sovereignty, constitutional and representative government, the rule of law and individual rights, particularly freedom of the press and freedom of religion.  Beyond that, what could be considered a liberal principle was “somewhat vague and debatable” (p.52).

Rosenblatt is strongest on how 19th century liberalism evolved, particularly in France and Germany, but also in Great Britain and the United States.  France and French thinkers were the center points in the history of 19th century liberalism, she contends, while Germany’s contributions are “usually underplayed, if not completely ignored” (p.3).  More cursory is her treatment of liberalism in the 20th century, packed into the last two of eight chapters and an epilogue.  The 20th century in her interpretation saw the United States and Great Britain become centers of liberal thinking, eclipsing France and Germany.  But since World War II, she argues, liberalism as defined in America has limited itself narrowly to the protection of individual rights and interests, without the moralism or dedication to the common good that were at the heart of 19th and early 20th century liberalism.

From the early 19th century through World War II, Rosenblatt insists, liberalism had “nothing to do with the atomistic individualism we hear of today.”  For a century and a half, most liberals were “moralists” who “never spoke about rights without stressing duties” (p.4).  People have rights because they have duties.  Liberals rejected the idea that a viable community could be “constructed on the basis of self-interestedness alone” (p.4).  Being a liberal meant “being a giving and a civic-minded citizen; it meant understanding one’s connectedness to other citizens and acting in ways conducive to the common good” (p.3-4).  The moral content of the political liberalism that emerged after the French Revolution constitutes the “lost” aspect of the history that Rosenblatt seeks to bring to light.

Throughout much of the 19th century, however, being a liberal did not mean being a democrat in the modern sense of the term.  Endorsing popular sovereignty, as did most early liberals, did not mean endorsing universal suffrage.  Voting was a trust, not a right.  Extending suffrage beyond property-holding males was an invitation to mob rule.  Only toward the end of the century did most liberals accept expansion of the franchise, as liberalism gradually became synonymous with democracy, paving the way for the 20th century term “liberal democracy.”

While 19th century liberalism was often criticized as opposed to religion, Rosenblatt suggests that it would be more accurate to say that it opposed the privileged position of the Catholic Church and aligned more easily with Protestantism, especially some forms emerging in Germany (although a small number of 19th century Catholic thinkers could also claim the term liberal).  But by the middle decades of the 19th century, liberalism’s challenges included not only the opposition of monarchists and the Catholic Church, but also what came to be known as “socialism” — the political movements representing a working class that was “self-conscious, politicized and angry” (p.101) as the Industrial Revolution was changing the face of Europe.

Liberalism’s response to socialism gave rise in the second half of the 19th century to the defining debate over its nature: was liberalism compatible with socialist demands for government intervention in the economy and direct government assistance to the working class and the destitute?  Or were the broad objectives of liberalism better advanced by the policies of economic laissez faire, in which the government avoided intervention in the economy and, as many liberals advocated, rejected what was termed “public charity” in favor of concentrating upon the moral improvement of the working classes and the poor so that they might lift themselves out of poverty?  This debate carried over into the 20th century and, Rosenblatt indicates, is still with us.

* * *

With surprising specificity, Rosenblatt attributes the origins of modern political liberalism to the work of the Swiss couple Benjamin Constant and his partner Madame de Staël, born Anne-Louise Germaine Necker, the daughter of Jacques Necker, a Swiss banker who served as finance minister to French King Louis XVI (Rosenblatt is also the author of a biography of Constant).  The couple arrived in Paris from Geneva in 1795, a year after the so-called Reign of Terror had ended with the execution of its most prominent advocate, Maximilien Robespierre.  As they reacted to the pressing circumstances brought about by the revolution, Rosenblatt contends, Constant and de Staël formulated the cluster of ideas that collectively came to be known as “liberalism,” although neither ever termed their ideas “liberal.”  Constant, the “first theorist of liberalism” (p.66), argued that it was not the “form of government that mattered,” but rather the amount. “Monarchies and republics could be equally oppressive. It was not to whom you granted political authority that counted, but how much authority you granted.  Political power is dangerously corrupting” (p.66).

Influenced in particular by several German theologians, Constant spoke eloquently about the need for a new and more enlightened version of Protestantism in the liberal state.  Religion was an “essential moralizing force” that “inspired selflessness, high-minded principles, and moral values, all crucial in a liberal society. But it mattered which religion, and it mattered what its relationship was to the state” (p.66).  A liberal government needed to be based upon religious toleration, that is, the removal of all legal disabilities attached to the faith one professed.  Liberalism envisioned strict separation of church and state and what we would today call “secularism,” ideas that placed it in direct conflict with the Catholic Church throughout the 19th century.

Constant and Madame de Staël initially supported Napoleon Bonaparte’s 1799 coup d’état.  They hoped Napoleon would thwart the counterrevolution and consolidate and protect the core liberal principles of the revolution. But as Napoleon placed the authority of the state in his own hands, pursued wars of conquest abroad, and allied himself with the Catholic Church, Constant and Madame de Staël became fervent critics of his increasingly authoritarian rule.

After Napoleon fell from power in 1815, an aggressive counter-attack on liberalism took place in France, led by the Catholic Church, in which liberals were accused of trying to “destroy religion, monarchy, and the family.  They were not just misguided but wicked and sinful.  Peddlers of heresy, they had no belief in duty, no respect for tradition or community.  In the writings of counter-revolutionaries, liberalism became a virtual symbol for atheism, violence, and anarchy” (p.68).  English conservative commentators frequently equated liberalism with Jacobinism.  For these commentators, liberals were “proud, selfish and licentious,” primarily interested in the “unbounded gratification of their passions” while refusing “restraints of any kind” (p.76).

Liberals’ hopes were buoyed, however, when the relatively bloodless, three-day 1830 Revolution in France deposed the ultra-royalist and strongly pro-Catholic Charles X in favor of the less reactionary Louis Philippe.  Among those initially supporting the 1830 Revolution was Alexis de Tocqueville, 19th century France’s most consequential liberal thinker after Constant and Madame de Staël.  Tocqueville famously toured the United States in the 1830s and offered his perspective on the country’s direction in Democracy in America, published in two volumes in 1835 and 1840, followed in 1856 by his analysis of the implications of the French Revolution, The Old Regime and the Revolution.

Tocqueville shared many of the widespread concerns of his age about democracy, especially its tendency to foster egoism and individualism.  He worried about the masses’ lack of “capacity.” He was one of the first to warn against what he called “democratic despotism,” where majority sentiment would be in a position to override the rights and liberties of minorities.  But Tocqueville also foresaw the forward march of democracy and the movement toward equality of all citizens as unstoppable, based primarily upon what he had observed in the United States (although he was aware of how the institution of slavery undermined American claims to be a society of equals).  Tocqueville counseled liberals in France not to try to stop democracy, but, as Rosenblatt puts it, to “instruct and tame” democracy, so that it “did not threaten liberty and devolve into the new kind of despotism France had seen under Napoleon” (p.95).

Tocqueville’s concerns about democracy and “excessive” equality were related to anxieties about how to accommodate the diverse movements that termed themselves socialist.  Initially, Rosenblatt stresses, the term socialist described “anyone who sympathized with the plight of the working poor . . . [T]here was no necessary contradiction between being liberal and being socialist” (p.103).  The great majority of mid-19th century liberals, she notes, whether British, French, or German, believed in free circulation of goods, ideas and persons but were “not all that averse to government intervention” and did not advocate “absolute property rights” (p.114).

In the last quarter of the 19th century, a growing number of British liberals began to favor a “new type of liberalism” that advocated “more government intervention on behalf of the poor.  They called for the state to take action to eliminate poverty, ignorance and disease, and the excessive inequality in the distribution of wealth.  They began to say that people should be accorded not just freedom, but the conditions of freedom” (p.226).  French commentators in the same time period began to urge that a middle way be forged between laissez-faire and socialism, termed “liberal socialism,” where the state became an “instrument of civilization” (p.147).

But it was in 1870s Germany where the debate crystallized between what came to be known as “classical” laissez faire liberalism and the “progressive” version, thanks in large part to the unlikely figure of Otto von Bismarck.  Although no liberal, Bismarck, who masterminded German unification in 1871 and served as the first Chancellor of the newly united nation, instituted a host of sweeping social welfare reforms for workers, including full and comprehensive insurance against sickness, industrial accidents, and disability.  Most historians attribute his social welfare measures to a desire to coopt and destroy the German socialist movement (a point Jonathan Steinberg makes in his masterful Bismarck biography, reviewed here in 2013).

Bismarck’s social welfare measures coincided with an academic assault on economic laissez faire led by a school of “ethical economists,” a small band of German university professors who attacked laissez faire with arguments that were empirical but also moral, based on a view of man as not a “solitary, self-interested individual” but a “social being with ethical obligations” (p.222).  Laissez-faire “allowed for the exploitation of workers and did nothing to remedy endemic poverty,” they contended, “making life worse, not better, for the majority of the inhabitants of industrializing countries” (p.222).  Industrial conditions would “only deteriorate and spread if governments took no action” (p.222).

In the late 19th and early 20th centuries, many young Americans studied in Germany under the ethical economists and their progeny.  They returned to the United States “increasingly certain that laissez-faire was simply wrong, both morally and empirically,” and “began to advocate more government intervention in the economy” (p.226).  On both sides of the Atlantic, liberalism and socialism were drawing closer together, but the debate between laissez faire liberalism and the interventionist version played out primarily on the American side.

* * *

During World War I, Rosenblatt argues, liberalism, democracy and Western civilization became “virtually synonymous,” with America, because of its rising strength, “cast as their principal defender” (p.258).  Germany’s contribution to liberalism was progressively forgotten or pushed aside and the French contribution minimized.  Two key World War I era American thinkers, Herbert Croly and John Dewey, contended that only the interventionist, or progressive, version of liberalism could claim to be truly liberal.

Croly, cofounder of the flagship progressive magazine The New Republic, delivered a stinging indictment of laissez-faire economics and a strong argument for government intervention in his 1909 work, The Promise of American Life.  By 1914, Croly had begun to call his own ideas liberal, and by mid-1916 the term was in common use in The New Republic as “another way to describe progressive legislation” (p.246).

The philosopher John Dewey acknowledged that there were “two streams” of liberalism.  But one was more humanitarian and therefore open to government intervention and social legislation, while the other was “beholden to big industry, banking, and commerce, and was therefore committed to laissez-faire” (p.261).  American liberalism, Dewey contended, had nothing to do with laissez-faire, and never had.  Nor did it have anything to do with what was called the “gospel of individualism.”  American liberalism stood for “‘liberality and generosity, especially of mind and character.’ Its aim was to promote greater equality and to combat plutocracy with the aid of government” (p.261).

Rosenblatt credits President Franklin D. Roosevelt’s New Deal with demonstrating how progressive liberalism could work in the political arena. Roosevelt, 20th century America’s most talented liberal practitioner, consistently claimed the moral high ground for liberalism.  He argued that liberals believed in “generosity and social mindedness and were willing to sacrifice for the public good” (p.261).  For Roosevelt, the core of the liberal faith was a belief in the “effectiveness of people helping each other” (p.261). But despite his high-minded advocacy for progressive liberalism – buttressed by his leadership of the country during the Great Depression and in World War II – Roosevelt did not vanquish the argument that economic laissez faire constituted the “true” liberalism.

In 1944, with America at war with Nazi Germany and Roosevelt within months of an unprecedented fourth term, the eminent Austrian economist Friedrich Hayek, then teaching at the London School of Economics, published The Road to Serfdom, the 20th century’s most concerted intellectual challenge to the interventionist strand of liberalism.  Any sort of state intervention or “collectivist experiment” threatened individual liberty and put countries on a slippery slope to fascism, Hayek argued in his surprise best seller.  Hayek grounded his arguments in English and American notions of individual freedom.  “Progressive liberalism,” which he considered a contradiction in terms, had its roots in Bismarck’s Germany, he argued, and led ineluctably to totalitarianism.  “[I]t is Germany whose fate we are in some danger of repeating” (p.268), Hayek warned his British and American readers in 1944.

Although Hayek always insisted that he was a liberal, his ideas became part of the American post World War II conservative argument against both fascism and communism (meanwhile, in France laissez faire economics became synonymous with liberalism; “liberal” is a political epithet in today’s France, but it denotes a free market advocate, diametrically opposed to its American meaning).  During the anti-Communist fervor of the Cold War that followed World War II, the interventionist liberalism that Croly and Dewey had preached and Roosevelt had put into practice was labeled “socialist” and even “communist.”  To American conservatives, those who accepted the interventionist version of liberalism were not really liberal; they were “totalitarian.”

* * *

The intellectual climate of the Cold War bred defensiveness in American liberals, Rosenblatt argues, provoking a need to “clarify and accentuate what made their liberalism not totalitarianism. It was in so doing that they toned down their plans for social reconstruction and emphasized, rather, their commitment to defending the rights of individuals” (p.271).  Post World War II American liberalism thus lost “much of its moral core and centuries-long dedication to the public good.  Individualism replaced it as liberals lowered their sights and moderated their goals” (p.271).  In bowing to Cold War realities, American liberals in the second half of the 20th century “willingly adopted the argument traditionally used to malign them . . . that liberalism was, at its core, an individualist, if not selfish, philosophy” (p.273).   Today, Rosenblatt finds, liberals “overwhelmingly stress a commitment to individual rights and choices; they rarely mention duties, patriotism, self-sacrifice, or generosity to others” (p.265-66).

Unfortunately, Rosenblatt provides scant elaboration for these provocative propositions, rendering her work incomplete.  A valuable follow up to this enlightening and erudite volume could concentrate on how the term “liberalism” has evolved over the past three quarters of a century, further helping us out of the muddle that surrounds the term.

Thomas H. Peebles

La Châtaigneraie, France

September 7, 2020

 


Filed under American Politics, English History, European History, France, French History, German History, History, Intellectual History, Political Theory

Reading Darwin in Abolitionist New England

 

Randall Fuller, The Book That Changed America:

How Darwin’s Theory of Evolution Ignited a Nation (Viking)

In mid-December 1859, the first copy of Charles Darwin’s On the Origin of Species arrived in the United States from England at a wharf in Boston harbor.  Darwin’s book explained how plants and animals had developed and evolved over multiple millennia through a process Darwin termed “natural selection,” a process which distinguished On the Origin of Species from the work of other naturalists of Darwin’s generation.  Although Darwin said little in the book about how humans fit into the natural selection process, the work promised to ignite a battle between science and religion.

In The Book That Changed America: How Darwin’s Theory of Evolution Ignited a Nation, Randall Fuller, professor of American literature at the University of Kansas, contends that what made Darwin’s insight so radical was its “reliance upon a natural mechanism to explain the development of species.  An intelligent Creator was not required for natural selection to operate.  Darwin’s vision was of a dynamic, self-generating process of material change.  That process was entirely arbitrary, governed by physical law and chance – and not leading ineluctably . . . toward progress and perfection” (p.24).  Darwin’s work challenged the notion that human beings were a “separate and extraordinary species, differing from every other animal on the planet. Taken to its logical conclusion, it demolished the idea that people had been created in God’s image” (p.24).

On the Origin of Species arrived in the United States at a particularly fraught moment.  In October 1859, abolitionist John Brown had conducted a raid on a federal arsenal in Harper’s Ferry (then part of Virginia, today West Virginia), with the intention of precipitating a rebellion that would eradicate slavery from American soil.  The raid failed spectacularly: Brown was captured, tried for treason and hanged on December 2, 1859.  The raid and its aftermath exacerbated tensions between North and South, further polarizing the already bitterly divided country over the issue of chattel slavery in its southern states.  Notwithstanding the little Darwin had written about how humans fit into the natural selection process, abolitionists seized on hints in the book that all humans were biologically related to buttress their arguments against slavery.  To the abolitionists, Darwin “seemed to refute once and for all the idea that African American slaves were a separate, inferior species” (p.x).

Asa Gray, a respected botanist at Harvard University and a friend of Darwin, received the first copy of On the Origin of Species in the United States.  He passed the copy, which he annotated heavily, to his cousin by marriage, Charles Loring Brace (who was also a distant cousin of Harriet Beecher Stowe, author of the anti-slavery runaway best-seller Uncle Tom’s Cabin).  Brace in turn introduced the book to three men: Franklin Benjamin Sanborn, a part-time school master and full-time abolitionist activist; Amos Bronson Alcott, an educator and loquacious philosopher, today best remembered as the father of author Louisa May Alcott; and Henry David Thoreau, one of America’s best known philosophers and truth-seekers.  Sanborn, Alcott and Thoreau were residents of Concord, Massachusetts, roughly twenty miles northwest of Boston, the site of a famous Revolutionary War battle but in the mid-19th century both a leading literary center and a hotbed of abolitionist sentiment.

As luck would have it, Brace, Alcott and Thoreau gathered at Sanborn’s Concord home on New Year’s Day 1860.  Only Gray did not attend.  The four men almost certainly shared their initial reactions to Darwin’s work.  This get-together constitutes the starting point for Fuller’s engrossing study, centered on how Gray and the four men in Sanborn’s parlor on that New Year’s Day absorbed Darwin’s book.  Darwin himself is at best a background figure in the study.  Several familiar figures make occasional appearances, among them: Frederick Douglass, renowned orator and “easily the most famous black man in America” (p.91); Bronson Alcott’s author-daughter Louisa May; and American philosopher Ralph Waldo Emerson, Thoreau’s mentor and friend.  Emerson, like Louisa May and her father, was a Concord resident, and Fuller’s study takes place mostly there, with occasional forays to nearby Boston and Cambridge.

Fuller’s study is therefore more tightly circumscribed geographically than its title suggests.  He spends little time detailing the reaction to Darwin’s work in other parts of the United States, most conspicuously in the American South, where any work that might seem to support abolitionism and undermine slavery was anathema.   The study is also circumscribed in time; it takes place mostly in 1860, with most of the rest confined to the first half of the 1860s, up to the end of the American Civil War in 1865.  Fuller barely mentions what is sometimes called “Social Darwinism,” a notion that gained traction in the decades after the Civil War that purported to apply Darwin’s theory of natural selection to the competition between individuals in politics and economics, producing an argument for unregulated capitalism.

Rather, Fuller charts out the paths each of his five main characters traversed in absorbing and assimilating into their own worldviews the scientific, religious and political ramifications of Darwin’s work, particularly during the tumultuous year 1860.  All five were fervent abolitionists.  Sanborn was a co-conspirator in John Brown’s raid.  Thoreau gave a series of eloquent, impassioned speeches in support of Brown.  All were convinced that Darwin’s notion of natural selection had provided still another argument against slavery, based on science rather than morality or economics.  But in varying degrees, all five could also be considered adherents of transcendentalism, a mid-19th century philosophical approach that posited a form of human knowledge that goes beyond, or transcends, what can be seen, heard, tasted, touched or felt.

Although transcendentalists were almost by definition highly individualistic, most believed that a special force or intelligence stood behind nature and that prudential design ruled the universe.  Many subscribed to the notion that humans were the products of some sort of “special creation.”  Most saw God everywhere, and considered the human mind “resplendent with powers and insights wholly distinct from the external world” (p.54).  Transcendentalism was both an effort to invoke the divinity within man and, as Fuller puts it, a “cultural attack on a nation that had become too materialistic, too conformist, too smug about its place in history” (p.66).

Transcendentalism thus hovered in the background in 1860 as all but Sanborn wrestled with the implications of Darwinism (Sanborn spent much of the year fleeing federal authorities seeking his arrest for his role in John Brown’s raid).  Alcott never left transcendentalism, rejecting much of Darwinism.  Gray and Brace initially seemed to embrace Darwinian theories wholeheartedly, but in different ways each pulled back once he grasped the full implications of those theories.  Thoreau was the only one of the five who wholly accepted Darwinism’s most radical implications, using Darwin’s theories to “redirect his life’s work” (p.ix).

Fuller’s study thus combines a deep dive into the New England abolitionist milieu at a time when the United States was fracturing over the issue of slavery with a medium-level dive into the intricacies of Darwin’s theory of natural selection.  But the story Fuller tells is anything but dry and abstract.  With an elegant writing style and an acute sense of detail, Fuller places his five men and their thinking about Darwin in their habitat, the frenetic world of 1860s New England.  In vivid passages, readers can almost feel the chilly January wind whistling through Franklin Sanborn’s parlor that New Year’s Day 1860, or envision the mud accumulating on Henry David Thoreau’s boots as he trudges through the melting snow in the woods on a March afternoon contemplating Darwin.  The result is a lively, easy-to-read narrative that nimbly mixes intellectual and everyday, ground-level history.

* * *

Bronson Alcott, described by Fuller as America’s most radical transcendentalist, never accepted the premises of On the Origin of Species.  Darwin had, in Alcott’s view, “reduced human life to chemistry, to mechanical processes, to vulgar materialism” (p.10).  To Alcott, Darwin seemed “morbidly attached to an amoral struggle of existence, which robbed humans of free will and ignored the promptings of the soul” (p.150).  Alcott could not imagine a universe “so perversely cruel as to produce life without meaning.  Nor could he bear to live in a world that was reduced to the most tangible and daily phenomena, to random change and process” (p.188).  Asa Gray, one of America’s most eminent scientists, came to the same realization, but only after thoroughly digesting Darwin and explaining his theories to a wide swath of the American public.

Gray’s initial reaction to Darwin’s work was one of unbounded enthusiasm.  Gray covered nearly every page of the book with his own annotations.  He admired the book because it “reinforced his conviction that inductive reasoning was the proper approach to science” (p.109).  He also admired the work’s “artfully modulated tone, [and] its modest voice, which softened the more audacious ideas rippling through the text” (p.17). Gray was most impressed with Darwin’s “careful judging and clear-eyed balancing of data” (p.110).  To grapple with Darwin’s ideas, Gray maintained, one had to “follow the evidence wherever it led, ignoring prior convictions and certainties or the narrative one wanted that evidence to confirm” (p.110).  Without saying so explicitly, Gray suggested that readers of Darwin’s book had to be “open to the possibility that everything they had taken for granted was in fact incorrect” (p.110).

Gray reviewed On the Origin of Species for the Atlantic Monthly in three parts, appearing in the summer and fall of 1860.  Gray’s articles served as the first encounter with Darwin for many American readers.  The articles elicited a steady stream of letters from respectful readers.  Some responded with “unalloyed enthusiasm” for a new idea which “seemed to unlock the mysteries of nature” (p.134).  Others, however, “reacted with anger toward a theory that proposed to unravel . . . their belief in a divine Being who had placed humans at the summit of creation” (p.134).  But as Gray finished the third Atlantic article, he began to realize that he himself was not entirely at ease with the diminution of humanity’s place in the universe that Darwin’s work implied.

The third Atlantic article, appearing in October 1860, revealed Gray’s increasing difficulty in “aligning Darwin’s theory with his own religious convictions” (p.213).  Gray proposed that natural selection might be “God’s chosen method of creation” (p.214).  This idea seemed to resolve the tension between scientific and religious accounts of origins, making Gray the first to develop a theological case for Darwinian theory.  But the idea that natural selection might be the process by which God had fashioned the world represented what Fuller describes as a “stunning shift for Gray. Before now, he had always insisted that secondary causes were the only items science was qualified to address.  First, or final causes – the beginning of life, the creation of the universe – were the purview of religion: a matter of faith and metaphysics” (p.214).  Darwin responded to Gray’s conjectures by indicating that, as Fuller summarizes the written exchange, the natural world was “simply too murderous and too cruel to have been created by a just and merciful God” (p.211).

In the Atlantic articles, Fuller argues, Gray leapt “beyond his own rules of science, speculating about something that was untestable” (p.214-15).  Gray must have known that his argument “failed to adhere to his own definition of science” (p.216).  But, much like Bronson Alcott, Gray found it “impossible to live in the world Darwin had imagined: a world of chance, a world that did not require a God to operate” (p.216).  Charles Brace, a noted social reformer who founded several institutions for orphans and destitute children, greeted Darwin’s book with an initial enthusiasm that rivaled that of Gray.

Brace claimed to have read On the Origin of Species 13 times.  He was most attracted to the book for its implications for human societies, especially for American society, where nearly half the country accepted and defended human slavery.  Darwin’s book “confirmed Brace’s belief that environment played a crucial role in the moral life of humans” (p.11), and demonstrated that every person in the world, black, white, or yellow, was related to everyone else.  The theory of natural selection was thus for Brace the “latest argument against chattel slavery, a scientific claim that could be used in the most important controversy of his time, a clarion call for abolition” (p.39).

Brace produced a tract entitled The Races of the Old World, modeled after Darwin’s On the Origin of Species, which Fuller describes as a “sprawling, ramshackle work” (p.199).  Its central thesis was simple enough: “There is nothing . . . to prove the negro radically different from the other families of man or even mentally inferior to them” (p.199-200).  But much of The Races of the Old World seemed to undercut Brace’s central thesis.  Although the book never defined the term “race,” Brace “apparently believed that though all humans sprang from the same source, some races had degraded over time . . . Human races were not permanent” (p.199-200).  Brace thus struggled to make Darwin’s theory fit his own ideas about race and slavery. “He increasingly bent facts to fit his own speculations” (p.197), as Fuller puts it.

The Races of the Old World revealed Brace’s hesitation in imagining a multi-racial America.  He couched in Darwinian terms the difficulty of the races cohabiting, reverting to what Fuller describes as nonsense about blacks not being conditioned to survive in the colder Northern climate.  Brace “firmly believed in the emancipation of slaves, and he was equally convinced that blacks and whites did not differ in their mental capacities” (p.202).  But he nonetheless worried that “race mixing,” or what was then termed race “amalgamation,” might imperil Anglo-Saxon America, the “apex of development. . . God’s favored nation, a place where democracy and Christianity had fused to create the world’s best hope” (p.202).  Brace joined many other leading abolitionists in opposing race “amalgamation.”  His conclusion that “black and brown-skinned people inhabited a lower rung on the ladder of civilization” was shared, Fuller indicates, by “even the most enlightened New England abolitionists” (p.57).

No such misgivings visited Thoreau, who grappled with On the Origin of Species “as thoroughly and as insightfully as any American of the period” (p.11).  As Thoreau first read his copy of the book in late January 1860, a “new universe took form on the rectangular page before him” (p.75).  Prior to his encounter with Darwin, Thoreau’s thought had often “bordered on the nostalgic.  He longed for the transcendentalist’s confidence in a natural world infused with spirit” (p.157).  But Darwin led Thoreau beyond nostalgia.

Thoreau was struck in particular by Darwin’s portrayal of the struggle among species as an engine of creation.  The Origin of Species revealed nature as process, in constant transformation.  Darwin’s book directed Thoreau’s attention “away from fixed concepts and hierarchies toward movement instead” (p.144-45).  The idea of struggle among species “undermined transcendentalist assumptions about the essential goodness of nature, but it also corroborated many of Thoreau’s own observations” (p.137).  Thoreau had “long suspected that people were an intrinsic part of nature – neither separate nor entirely alienated from it” (p.155).  Darwin now enabled Thoreau to see how “people and the environment worked together to fashion the world,” providing a “scientific foundation for Thoreau’s belief that humans and nature were part of the same continuum” (p.155).

Darwin’s natural selection, Thoreau wrote, “implies a greater vital force in nature, because it is more flexible and accommodating, and equivalent to a sort of constant new creation” (p.246).  The phrase “constant new creation” in Fuller’s view represents an “epoch in American thought” because it “no longer relies upon divinity to explain the natural world” (p.246).  Darwin thus propelled Thoreau to a radical vision in which there was “no force or intelligence behind Nature, directing its course in a determined and purposeful manner.  Nature just was” (p.246-47).

How far Thoreau would have taken these ideas is impossible to know. He became sick in December 1860, stricken with influenza, exacerbated by tuberculosis, and died in June 1862, with Americans fighting other Americans on the battlefield over the issue of slavery.

* * *

Fuller compares Darwin’s On the Origin of Species to a Trojan horse.  It entered American culture “using the newly prestigious language of science, only to attack, once inside, the nation’s cherished beliefs. . . With special and desolating force, it combated the idea that God had placed humans at the peak of creation” (p.213).  That the book’s attack did not spare even New England’s best known abolitionists and transcendentalists demonstrates just how unsettling the attack was.

Thomas H. Peebles

La Châtaigneraie, France

May 18, 2020

 


Filed under American Society, History, Political Theory, Religion, Science, United States History

A Defense of Truth

 

Dorian Lynskey, The Ministry of Truth:

The Biography of George Orwell’s 1984 

George Orwell’s name, like those of William Shakespeare, Charles Dickens and Franz Kafka, has given rise to an adjective.  “Orwellian” connotes official deception, secret surveillance, misleading terminology, and the manipulation of history.  Several terms used in Orwell’s best known novel, Nineteen Eighty-Four, have entered into common usage, including “doublethink,” “thought crime,” “newspeak,” “memory hole,” and “Big Brother.”  First published in June 1949, a little over a half year prior to Orwell’s death in January 1950, Nineteen Eighty-Four is consistently described as a “dystopian” novel – a genre of fiction which, according to Merriam-Webster, pictures “an imagined world or society in which people lead wretched, dehumanized, fearful lives.”

This definition fits neatly the world that Orwell depicted in Nineteen Eighty-Four, a world divided between three inter-continental super states perpetually at war, Oceania, Eurasia and Eastasia, with Britain reduced to a province of Oceania bearing the sardonic name “Airstrip One.”  Airstrip One is ruled by The Party under the ideology Ingsoc, a shortening of “English socialism.”  The Party’s leader, Big Brother, is the object of an intense cult of personality — even though there is no hard proof he actually exists.  Surveillance through two-way telescreens and propaganda are omnipresent.  The protagonist, Winston Smith, is a diligent lower-level Party member who works at the Ministry of Truth, where he rewrites historical records to conform to the state’s ever-changing version of history.  Smith enters into a forbidden relationship with his co-worker, Julia, a relationship that terminates in mutual betrayal.

In his intriguing study, The Ministry of Truth: The Biography of George Orwell’s 1984, British journalist and music critic Dorian Lynskey seeks to explain what Nineteen Eighty-Four “actually is, how it came to be written, and how it has shaped the world, in its author’s absence, over the past seventy years” (p.xiv).  Although there are biographies of Orwell and academic studies of Nineteen Eighty-Four’s intellectual context, Lynskey contends that his is the first to “merge the two streams into one narrative, while also exploring the book’s afterlife” (p.xv; I reviewed Thomas Ricks’ book on Orwell and Winston Churchill here in November 2017).  Lynskey’s work is organized in a “Before/After” format.  Part I, roughly the first two-thirds of the book, looks at the works and thinkers who influenced Orwell and his novel, juxtaposed with basic Orwell biographical background.  Part II, roughly the last third, examines the novel’s afterlife.

But Lynskey begins in a surprising place, Washington, D.C., in January 2017, where a spokesman for President Donald Trump told the White House press corps that the recently-elected president had taken his oath of office before the “largest audience to ever witness an inauguration – period – both in person and around the globe.”  A presidential adviser subsequently justified this “preposterous lie” by characterizing the statement as “alternative facts” (p.xiii).   Sales of Orwell’s book shot up immediately thereafter.  The incident constitutes a reminder, Lynskey contends, of the “painful lessons that the world appears to have unlearned since Orwell’s lifetime, especially those concerning the fragility of truth in the face of power” (p.xix).

How Orwell came to see the consequences of mutilating truth and gave them expression in Nineteen Eighty-Four is the focus of Part I.  Orwell’s brief participation in the Spanish Civil War, from December 1936 through mid-1937, was paramount among his personal experiences in shaping the novel’s worldview.  Spain was the “great rupture in his life; his zero hour” (p.4), the experience that led Orwell to the conclusion that Soviet communism was as antithetical as fascism and Nazism to the values he held dear (Lynskey’s list of Orwell’s values: “honesty, decency, fairness, memory, history, clarity, privacy, common sense, sanity, England, and love” (p.xv)).  While no single work provided an intellectual foundation for Nineteen Eighty-Four in the way that the Spanish Civil War provided the personal and practical foundation, Lynskey discusses numerous writers whose works contributed to the worldview on display in Orwell’s novel.

Lynskey dives deeply into the novels and writings of Edward Bellamy, H.G. Wells and the Russian writer Yevgeny Zamyatin.  Orwell’s friend Arthur Koestler set out what Lynskey terms the “mental landscape” for Nineteen Eighty-Four in his 1940 classic Darkness at Noon, while the American conservative James Burnham provided the novel’s “geo-political superstructure” (p.126).  Lynskey discusses a host of other writers whose works in one way or another contributed to Nineteen Eighty-Four’s world view, among them Jack London, Aldous Huxley, Friedrich Hayek, and the late 17th and early 18th century satirist Jonathan Swift.

In Part II, Lynskey treats some of the dystopian novels and novelists that have appeared since Nineteen Eighty-Four.  He provides surprising detail on David Bowie, who alluded to Orwell in his songs and wrote material that reflected the outlook of Nineteen Eighty-Four.  He notes that Margaret Atwood termed her celebrated The Handmaid’s Tale a “speculative fiction of the George Orwell variety” (p.241).  But the crux of Part II lies in Lynskey’s discussion of the evolving interpretations of the novel since its publication, and why it still matters today.  He argues that Nineteen Eighty-Four has become both a “vessel into which anyone could pour their own version of the future” (p.228), and an “all-purpose shorthand” for an “uncertain present” (p.213).

In the immediate aftermath of its publication, when the Cold War was at its height, the novel was seen by many as a lesson on totalitarianism and the dangers that the Soviet Union and Communist China posed to the West (Eurasia, Eastasia and Oceania in the novel correspond roughly to the Soviet Union, China and the West, respectively).  When the Cold War ended with the fall of the Soviet Union in 1991, the novel morphed into a warning about the invasive technologies spawned by the Internet and their potential for surveillance of individual lives.  In the Age of Trump and Brexit, the novel has become “most of all a defense of truth . . . Orwell’s fear that ‘the very concept of objective truth is fading out of the world’ is the dark heart of Nineteen Eighty-Four. It gripped him long before he came up with Big Brother, Oceania, Newspeak or the telescreen, and it’s more important than any of them” (p.265-66).

* * *

Orwell was born as Eric Blair in 1903 in India, where his father was a mid-level civil servant.  His mother was half-French and a committed suffragette.  In 1933, prior to publication of his first major book, Down and Out in Paris and London, which recounts his life in voluntary poverty in the two cities, the fledgling author took the pen name Orwell from a river in Suffolk.  He changed names purportedly to save his parents from the embarrassment which he assumed his forthcoming work would cause.  He was at best a mid-level journalist and writer when he went to Spain in late 1936, with a handful of novels and lengthy essays to his credit – “barely George Orwell” (p.4), as Lynskey puts it.

The Spanish Civil War erupted after Spain’s Republican government, known as the Popular Front, a coalition of liberal democrats, socialists and communists, narrowly won a parliamentary majority in 1936, only to face a rebellion from the Nationalist forces of General Francisco Franco, representing Spain’s military, business elites, large landowners and the Catholic Church.  Nazi Germany and Fascist Italy furnished arms and other assistance for the Nationalists’ assault on Spain’s democratic institutions, while the Soviet Union assisted the Republicans (the leading democracies of the period, Great Britain, France and the United States, remained officially neutral; I reviewed Adam Hochschild’s work on the Spanish Civil War here in August 2017).  Spain provided Orwell with his first and only personal exposure to the “nightmare atmosphere” (p.17) that would envelop the novel he wrote a decade later.

Fighting with the Workers’ Party of Marxist Unification (Spanish acronym: POUM), a renegade working class party that opposed Stalin, Orwell quickly found himself in the middle of what amounted to a mini-civil war among the disparate left-wing factions on the Republican side, all within the larger civil war with the Nationalists.  Orwell saw first-hand the dogmatism and authoritarianism of the Stalinist left at work in Spain, nurtured by a level of deliberate deceit that appalled him.  He read newspaper accounts that did not even purport to bear any relationship to what had actually happened. For Orwell previously, Lynskey writes:

people were guilty of deliberate deceit or unconscious bias, but at least they believed in the existence of facts and the distinction between true and false. Totalitarian regimes, however, lied on such a grand scale that they made Orwell feel that ‘the very concept of objective truth is fading out of the world’ (p.99).

Orwell saw totalitarianism in all its manifestations as dangerous not primarily because of secret police or constant surveillance but because “there is no solid ground from which to mount a rebellion –no corner of the mind that has not been infected and warped by the state.  It is power that removes the possibility of challenging power” (p.99).

Orwell narrowly escaped death when he was hit by a bullet in the spring of 1937.  He was hospitalized in Barcelona for three weeks, after which he and his wife Eileen escaped across the border to France.  Driven to Spain by his hatred of fascism, Orwell left with a “second enemy. The fascists had behaved just as appallingly as he had expected they would, but the ruthlessness and dishonesty of the communists had shocked him” (p.18).  From that point onward, Orwell criticized communism more energetically than fascism because he had seen communism “up close, and because its appeal was more treacherous. Both ideologies reached the same totalitarian destination but communism began with nobler aims and therefore required more lies to sustain it” (p.22).   After his time in Spain, Orwell knew that he stood against totalitarianism of all stripes, and for democratic socialism as its counterpoint.

The term “dystopia” was not used frequently in Orwell’s time, and Orwell distinguished between “favorable” and “pessimistic” utopias.   Orwell developed what he termed a “pitying fondness” (p.38) for nineteenth-century visions of a better world, particularly the American Edward Bellamy’s 1888 novel Looking Backward.  This highly popular novel contained a “seductive political argument” (p.33) for the nationalization of all industry, and the use of an “industrial army” to organize production and distribution.  Bellamy had what Lynskey terms a “thoroughly pre-totalitarian mind,” with an “unwavering faith in human nature and common sense” that failed to see the “dystopian implications of unanimous obedience to a one-party state that will last forever” (p.38).

Bellamy was a direct inspiration for the works of H.G. Wells, one of the most prolific writers of his age. Wells exerted enormous influence on the young Eric Blair, looming over the boy’s childhood “like a planet – awe inspiring, oppressive, impossible to ignore – and Orwell never got over it” (p.60).  Often called the English Jules Verne, Wells foresaw space travel, tanks, electric trains, wind and water power, identity cards, poison gas, the Channel tunnel and atom bombs.  His fiction imagined time travel, Martian invasions, invisibility and genetic engineering.  The word Wellsian came to mean “belief in an orderly scientific utopia,” but his early works are “cautionary tales of progress thwarted, science abused and complacency punished” (p.63).

Wells was himself a direct influence upon Yevgeny Zamyatin’s We which, in Lynskey’s interpretation, constitutes the most direct antecedent to Nineteen Eighty-Four.  Finished in 1920 at the height of the civil war that followed the 1917 Bolshevik Revolution (but not published in the Soviet Union until 1988), We is set in an undefined future, a time when people are referred to only by numbers.  The protagonist, D-503, a spacecraft engineer, lives in the One State, where mass surveillance is omnipresent and all aspects of life are scientifically managed.  It is an open question whether We was intended to satirize the Bolshevik regime, in 1920 already a one-party state with extensive secret police.

Zamyatin died in exile in Paris in 1937, at age 53.   Orwell did not read We until sometime after its author’s death.  Whether Orwell “took ideas straight from Zamyatin or was simply thinking along similar lines” is “difficult to say” (p.108), Lynskey writes.  Nonetheless, it is “impossible to read Zamyatin’s bizarre and visionary novel without being strongly reminded of stories that were written afterwards, Orwell’s included” (p.102).

Koestler’s Darkness at Noon offered a solution to the central riddle of the Moscow show trials of the 1930s: “why did so many Communist party members sign confessions of crimes against the state, and thus their death warrants?”  Koestler argued that their “years of unbending loyalty had dissolved their belief in objective truth: if the Party required them to be guilty, then guilty they must be” (p.127).  To Orwell this meant that one is punished in totalitarian states not for “what one does but for what one is, or more exactly, for what one is suspected of being” (p.128).

The ideas contained in James Burnham’s 1941 book The Managerial Revolution “seized Orwell’s imagination even as his intellect rejected them” (p.122).  A Trotskyite in his youth who in the 1950s helped William F. Buckley found the conservative weekly The National Review, Burnham saw the future belonging to a huge, centralized bureaucratic state run by a class of managers and technocrats.  Orwell made a “crucial connection between Burnham’s super-state hypothesis and his own long-standing obsession with organized lying” (p.121-22).

Orwell’s chronic lung problems precluded him from serving in the military during World War II.  From August 1941 to November 1943, he worked for the Indian Section of the BBC’s Eastern Service, where he found himself “reluctantly writing for the state . . . Day to day, the job introduced him to the mechanics of propaganda, bureaucracy, censorship and mass media, informing Winston Smith’s job at the Ministry of Truth” (p.83; Orwell’s boss at the BBC was notorious Cambridge spy Guy Burgess, whose biography I reviewed here in December 2017).   Orwell left the BBC in 1943 to become literary editor of the Tribune, an anti-Stalinist weekly.

While at the Tribune, Orwell found time to produce Animal Farm, a “scrupulous allegory of Russian history from the revolution to the Tehran conference” (p.138), with each character representing an individual: Stalin, Trotsky, Hitler, and so on.  Animal Farm shared with Nineteen Eighty-Four an “obsession with the erosion and corruption of memory” (p.139).  Memories in the two works are gradually erased, first, by the falsification of evidence; second, by the infallibility of the leader; third, by language; and fourth, by time.  Published in August 1945, Animal Farm quickly became a best seller.  The fable’s unmistakable anti-Soviet message forced Orwell to remind readers that he remained a socialist.  “I belong to the Left and must work inside it,” he wrote, “much as I hate Russian totalitarianism and its poisonous influence in this country” (p.141).

Earlier in 1945, Orwell’s wife Eileen died suddenly after being hospitalized for a hysterectomy, less than a year after the couple had adopted a son, whom they named Richard Horatio Blair.  Orwell grieved the loss of his wife by burying himself in the work that culminated in Nineteen Eighty-Four.  But Orwell became ever sicker with tuberculosis as he worked over the next four years on the novel, which was titled The Last Man in Europe until almost immediately prior to publication (Lynskey gives no credence to the theory that Orwell selected 1984 as an inversion of the last two digits of 1948).

Yet, Lynskey rejects the notion that Nineteen Eighty-Four was the “anguished last testament of a dying man” (p.160).  Orwell “never really believed he was dying, or at least no more than usual. He had suffered from lung problems since childhood and had been ill, off and on, for so long that he had no reason to think that this time would be the last” (p.160).  His novel was published in June 1949.  Orwell died 227 days later, in January 1950, when a blood vessel in his lung ruptured.

* * *

Nineteen Eighty-Four had an immediate positive reception.  The book was variously compared to an earthquake, a bundle of dynamite, and the label on a bottle of poison.  It was made into a movie, a play, and a BBC television series.  Yet, Lynskey writes, “people seemed determined to misunderstand it” (p.170).  During the Cold War of the early 1950s, conservatives and hard line leftists both saw the book as a condemnation of socialism in all its forms.  The more astute critics, Lynskey argues, were those who “understood Orwell’s message that the germs of totalitarianism existed in Us as well as Them” (p.182).  The Soviet invasion of Hungary in 1956 constituted a turning point in interpretations of Nineteen Eighty-Four.  After the invasion, many of Orwell’s critics on the left “had to accept that they had been wrong about the nature of Soviet communism and that he [Orwell] had been infuriatingly right” (p.210).

The hoopla that accompanied the actual year 1984, Lynskey notes wryly, came about only because “one man decided, late in the day, to change the title of his novel” (p.234).   By that time, the book was being read less as an anti-communist tract and more as a reminder of the abuses exposed in the Watergate affair of the previous decade, the excesses of the FBI and CIA, and the potential for mischief that personal computers, then in their infancy, posed.  With the fall of the Berlin wall and the end of communism between 1989 and 1991, focus on the power of technology intensified.

But today the focus is on Orwell’s depiction of the demise of objective truth in Nineteen Eighty-Four, and appropriately so, Lynskey argues, noting how President Trump masterfully “creates his own reality and measures his power by the number of people who subscribe to it: the cruder the lie, the more power its success demonstrates” (p.264).  It is truly Orwellian, Lynskey contends, that the phrase “fake news” has been “turned on its head by Trump and his fellow authoritarians to describe real news that is not to their liking, while flagrant lies become ‘alternative facts’” (p.264).

* * *

While resisting the temptation to term Nineteen Eighty-Four more relevant now than ever, Lynskey asserts that the novel today is nonetheless “a damn sight more relevant than it should be” (p.xix).  An era “plagued by far-right populism, authoritarian nationalism, rampant disinformation and waning faith in liberal democracy,” he concludes, is “not one in which the message of Nineteen Eighty-Four can be easily dismissed” (p.265).

Thomas H. Peebles

La Châtaigneraie, France

February 25, 2020


Filed under Biography, British History, European History, Language, Literature, Political Theory, Politics, Soviet Union