
Criticizing Government Was What They Knew How To Do


Paul Sabin, Public Citizens:

The Attack on Big Government and the Remaking of American Liberalism

(W.W. Norton & Co., 2021)

1965 marked the high point for Democratic President Lyndon Johnson’s Great Society program, an ambitious set of policy and legislative initiatives that envisioned using the machinery of the federal government to alleviate poverty, combat racial injustice and address other pressing national needs.  Johnson was coming off a landslide victory in the November 1964 presidential election, having carried 44 states and the District of Columbia with the highest percentage of the popular vote of any presidential candidate in over a century.  Yet a decade and a half later, in January 1981, Republican Ronald Reagan, after soundly defeating Democratic incumbent Jimmy Carter, took the presidential oath of office declaring that “government is not the solution to our problem; government is the problem.”

How did government in the United States go in a fifteen-year period from being the solution to society’s ills to the cause of its problems?  How, for that matter, did the Democratic Party go from dominating the national political debate up through the mid-1960s to surrendering the White House to a former actor who had been considered too extreme to be a viable presidential candidate?  These are questions Yale University professor Paul Sabin poses at the outset of his absorbing Public Citizens: The Attack on Big Government and the Remaking of American Liberalism.  Focusing on the fifteen-year period 1965-1980, Sabin proffers answers centered on Ralph Nader and the “public interest” movement which Nader spawned.

1965 was also the year Nader rocketed to national prominence with Unsafe at Any Speed, his indictment of the automobile industry’s record on safety.  General Motors notoriously assisted Nader in his rise by conducting a concerted campaign to harass the previously obscure author.  From there, Nader and the lawyers and activists in his movement – often called “Nader’s Raiders” — turned to such matters as environmentalism, consumer safety and consumer rights, arguing that the government agencies charged with regulating these matters invariably came to be captured by the very industries they were designed to regulate, without the voice of the consumer or end user being heard.  “Why has business been able to boss around the umpire” (p.86) was one of Nader’s favorite rhetorical questions.

Because of both industry influence and bureaucratic ineffectiveness, government regulatory authority operated in the public interest only when pushed and prodded from the outside, Nader reasoned.  In Nader’s world, moreover, the Democratic and Republican parties were two sides of the same corrupt coin, indistinguishable in the degree to which they were both beholden to special corporate interests — “Tweedle Dee and Tweedle Dum,” as he liked to put it.

Reagan viewed government regulation from an altogether different angle.  Whereas Nader believed that government, through effective regulation of the private sector, could help make consumer goods safer, and air and water cleaner, Reagan sought to liberate the private sector from regulation.  He championed a market-oriented capitalism designed to “undermine, rather than invigorate, federal oversight” (p.167).  Yet, Sabin’s broadest argument is that Nader’s insistence over the course of a decade and a half that federal agencies used their powers for “nefarious and destructive purposes” (p.167) — the “attack on big government” portion of his title — rendered plausible Reagan’s superficially similar attack.

The “remaking of American liberalism” portion of Sabin’s sub-title might have better been termed “unmaking,” specifically the unmaking of the political liberalism rooted in Franklin Roosevelt’s New Deal – the liberalism which Johnson sought to emulate and build upon in his Great Society, based on a strong and active federal government. Following in the New Deal tradition, Roosevelt’s Democratic party controlled the White House for all but eight years between 1933 and 1969.  Yet, when Reagan assumed the presidency in 1981, New Deal liberalism had clearly surrendered its claim to national dominance.

Most interpretations of how and why New Deal liberalism lost its clout are rooted in the 1960s, with the decade’s anti-Vietnam war and Civil Rights movements as the principal actors.  The Vietnam war separated older blue-collar Democrats, who often saw the war in the same patriotic terms as World War II, from a younger generation of anti-war activists who perceived no genuine US interests in the conflict and no meaningful difference in defense and foreign policy between Democrats and Republicans.  The Civil Rights movement witnessed the defection of millions of white Democrats, unenthusiastic about the party’s endorsement of full equality for African Americans, to the Republican Party.

Nader and the young activists following him were also “radicalized by the historical events of the 1960s, particularly the civil rights movement and the Vietnam War” (p.48), Sabin writes.  These were their “defining issues,” shaping “their view of the government and their ambitions for their own lives” (p.51).  We cannot imagine Nader’s movement “emerging in the form that it did separate from civil rights and the war” (p.48).  But by elaborating upon the role of the public interest movement in the breakdown of New Deal liberalism and giving more attention to the 1970s, Sabin adds nuance to conventional interpretations of that breakdown.

The enigmatic Nader is the central figure in Sabin’s narrative.  Much of the book analyzes how Nader and his public interest movement interacted with the administrations of Lyndon Johnson, Richard Nixon, Gerald Ford, and Jimmy Carter, along with brief treatment of the Reagan presidency and that of Bill Clinton.  The Carter years, 1977-1981, revealed the public interest movement’s most glaring weakness: its “inability to come to terms with the compromises inherent in running the executive branch” (p.142), as Sabin artfully puts it.

Carter was elected in 1976, when the stain of the Watergate affair and the 1974 resignation of Richard Nixon hovered over American politics, with trust in government at a low point.  Carter believed in making government regulation more efficient and effective, which he saw as a means of rebuilding public trust.   Yet, he failed to craft what Sabin terms a “new liberalism” that could “champion federal action while also recognizing government’s flaws and limitations” (p.156).

That failure was due in no small measure to frequent and harsh criticism emanating from public interest advocates, whose critique of the Carter administration, Sabin writes, “held those in power up against a model of what they might be, rather than what the push and pull of political compromise and struggle allowed” (p.160).  Criticizing government power was “what they knew how to do, and it was the role that they had defined for themselves”  (p.156). Metaphorically, it was “as if liberals took a bicycle apart to fix it but never quite figured out how to get it running again” (p.xvii).

* * *

Sabin starts by laying out the general parameters of New Deal liberalism: a technocratic faith that newly created administrative agencies and the bureaucrats leading them would act in the public interest by serving as a counterpoint to the power of private, especially corporate, interests.  By the mid-1950s, the liberal New Deal conception of “managed capitalism” had evolved into a model based on what prominent economist John Kenneth Galbraith termed “countervailing powers,” in which large corporations, held in balance by the federal regulatory state, “would check each other’s excesses through competition, and powerful unions would represent the interests of workers.  Government would play a crucial role, ensuring that the system did not tilt too far in one direction or the other” (p.7-8).

Nader’s public interest movement was built around a rejection of Galbraith’s countervailing power model.  The model failed to account for the interests of consumers and end users, as the economist himself admitted later in his career.  If there was to be a countervailing power, Nader theorized, it would have to come through the creation of “independent, nonbureaucratic, citizen-led organizations that existed somewhat outside the traditional American power structure” (p.59).  Only such organizations provided the means to keep power “insecure” (p.59), as Nader liked to say.

Nader’s vision could be described broadly as “ensuring safety in every setting where Americans might find themselves: workplace, home, doctor’s office, highway, or just outside, breathing the air”  (p.36).  In a 1969 essay in the Nation, Nader termed car crashes, workplace accidents, and diseases the “primary forms of violence that threatened Americans” (p.75), far exceeding street crime and urban unrest.  For Nader, environmental and consumer threats revealed the “pervasive failures and corruption of American industry and government” (p.76).

Nader was no collectivist, neither a socialist nor a New Dealer.  He emphasized open and competitive markets, small private businesses, and especially an activated citizenry — the “public citizens” of his title.  More than any peer, Nader sought to “create institutions that would mobilize and nurture other citizen activists” (p.35).  To that end, Nader founded dozens of public interest organizations, which were able to attract idealistic young people — lawyers, engineers, scientists, and others, overwhelmingly white, largely male — to dedicate their early careers to opposing the “powerful alliance between business and government” (p.24).

Nader envisioned citizen-led public interest organizations serving as a counterbalance not only to business and government but also to labor.  Although Nader believed in the power of unions to represent workers, he was “deeply skeptical that union leaders would be reliable agents for progressive reform” (p.59).  Union bosses in Nader’s view “too often positioned themselves as partners with industry and government, striking bargains that yielded economic growth, higher wages, and union jobs at the expense of the health and well-being of workers, communities, and the environment” (p.59).  Nader therefore “forcefully attacked the unions for not doing enough to protect worker safety and health or to allow worker participation in governance” (p.64).

Nader’s Unsafe at Any Speed was modeled after Rachel Carson’s groundbreaking environmental tract Silent Spring, to the point that it was termed the “Silent Spring of traffic safety” (p.23).  Nader’s auto safety advocacy, Sabin writes, emerged from “some of the same wellsprings as the environmental movement, part of an increasingly shared postwar concern about the harmful and insidious impacts of new technologies and processes” (p.23).  In 1966, a year after publication of Unsafe at Any Speed, Congress passed two landmark pieces of legislation, the Traffic Safety Act and the Highway Safety Act, which forced manufacturers to design safer cars and pressed states to carry out highway safety programs.  Nader then branched out beyond auto safety to tackle issues like meat inspection, natural-gas pipelines, and radiation safety.

Paradoxically, the Nixon years were among the most fruitful for Nader and the public interest movement.  Ostensibly pro-business and friendly with blue-collar Democrats, Nixon presided over a breathtaking expansion of federal regulatory authority until his presidency was pretermitted by the Watergate affair.  The Environmental Protection Agency was created in 1970, consolidating several smaller federal units.  New legislation which Nixon signed regulated air and water pollution, energy production, endangered species, toxic substances, and land use — “virtually every sector of the US economy” (p.114), Sabin writes.

The key characteristics of Nader-influenced legislation were deadlines and detailed mandates, along with authority for citizen suits and judicial review, a clear break from earlier regulatory strategies.  The tough legislation signaled a “profound and pervasive distrust of government even as it expanded federal regulatory powers” (p.82).   Nader and the public interest movement went after Democrats in Congress with a fervor at least equal to that with which they attacked Republican-led regulatory agencies.  Nader believed that “you didn’t attack your enemy if you wanted to accomplish something, you attacked your friend”  (p.82).

In the early 1970s, the public interest movement targeted Democratic Maine Senator Edmund Muskie, the party’s nominee for Vice-President in 1968, whose support for the environmental movement had earned him the moniker “Mr. Pollution Control.” Declaring his environmental halo unwarranted, the movement sought to take down a man who clearly wanted to ride the environmental issue to the White House.  Nader’s group also went after long-time liberal Democrat Jennings Randolph of West Virginia over coal-mining health and safety regulations.  The adversarial posture toward everyone in power, Democrat as well as Republican, continued into the short interim administration of Gerald Ford, who assumed the presidency in the wake of the Watergate scandal.  And it continued unabated during the administration of Jimmy Carter.

As the Democratic nominee for president, Carter had conferred with Nader during the 1976 campaign and thought he had the support of the public interest movement when he entered the White House in January 1977.  Many members of the movement took positions in the new administration, where they could shape the agencies they had been pressuring.  The new president sought to incorporate the public interest movement’s critiques of government into a “positive vision for government reform,” promoting regulatory approaches that “cut cost and red tape without sacrificing legitimate regulatory goals” (p.186).

Hoping to introduce more flexible regulatory strategies that could achieve environmental and health protection goals at lower economic cost, Carter sacrificed valuable political capital by clashing with powerful congressional Democrats over wasteful and environmentally destructive federal projects.  Yet, public interest advocates faulted Carter for his purported lack of will more than they credited him for sacrificing his political capital for their causes.  They saw the administration’s questioning of regulatory costs and the redesign of government programs as “simply ways to undermine those agencies” (p.154).  Their lack of enthusiasm for Carter severely undermined his reelection bid in the 1980 campaign against Ronald Reagan.

Reagan’s victory “definitively marked the end of the New Deal liberal period, during which Americans had optimistically looked to the federal government for solutions” (p.165), Sabin observes.  Reagan and his advisors “vocally rejected, and distanced themselves from, Carter’s nuanced approach to regulation”  (p.172). To his critics, Reagan appeared to be “trying to shut down the government’s regulatory apparatus” (p.173).

But in considering the demise of New Deal liberalism, Sabin persuasively demonstrates that the focus on Reagan overlooks how the post-World War II administrative state “lost its footing during the 1970s” (p.165).    The attack on the New Deal regulatory state that culminated in Reagan’s election, usually attributed to a rising conservative movement, was also “driven by an ascendant liberal public interest movement” (p.166).   Sabin’s bottom line: blaming conservatives alone for the end of the New Deal is “far too simplistic” (p.165).

* * *

Sabin mentions Nader’s 2000 presidential run on the Green Party ticket only at the end and only in passing.  Although the Nader-inspired public interest movement had wound down by then, Nader gained widespread notoriety that year when he gathered about 95,000 votes in Florida, a state which Democratic nominee Al Gore lost officially by 537 votes out of roughly six million cast (with no small amount of assistance from a controversial 5-4 Supreme Court decision).  Nader’s entire career had been a rebellion against the Democratic Party in all its iterations, and his quixotic run in 2000 demonstrated that he had not outgrown that rebellion.  His presidential campaign took his “lifelong criticism of establishment liberalism to its logical extreme” (p.192).

Thomas H. Peebles

Paris, France

May 13, 2022


Taking Exception To American Foreign Policy

Andrew Bacevich, After the Apocalypse:

America’s Role in a World Transformed (Metropolitan Books 2020)

Andrew Bacevich is one of America’s most relentless and astute critics of United States foreign policy and the role the American military plays in the contemporary world.  Professor Emeritus of History and International Relations at Boston University and presently president of the Quincy Institute for Responsible Statecraft, Bacevich is a graduate of the United States Military Academy who served in the United States Army for over 20 years, including a year in Vietnam.  In his most recent book, After the Apocalypse: America’s Role in a World Transformed, which came out toward the end of 2020, Bacevich makes an impassioned plea for a smaller American military, a demilitarized and more humble US foreign policy, and more realistic assessments of US security and genuine threats to that security, along with greater attention to pressing domestic needs.  Linking these strands is Bacevich’s scathing critique of American exceptionalism, the idea that the United States has a special role to play in maintaining world order and promoting American democratic values beyond its shores.

In February 2022, as I was reading, then writing and thinking about After the Apocalypse, Vladimir Putin continued amassing soldiers on the Ukraine border and threatening war before invading the country on the 24th.  Throughout the month, I found my views of Bacevich’s latest book taking form through the prism of events in Ukraine.  Some of the book’s key points — particularly on NATO, the role of the United States in European defense, and yes, Ukraine — seemed out of sync with my understanding of the facts on the ground and in need of updating.  “Timely” did not appear to be the best adjective to apply to After the Apocalypse.

Bacevich is a difficult thinker to pigeonhole.  While he sometimes describes himself as a conservative,  in After the Apocalypse he speaks the language of those segments of the political left that border on isolationist and recoil at almost all uses of American military force (these are two distinct segments: I find myself dependably in the latter camp but have little affinity with the former).  But Bacevich’s against-the-grain perspective is one that needs to be heard and considered carefully, especially when war’s drumbeat can be heard.

* * *

Bacevich’s recommendations in After the Apocalypse for a decidedly smaller footprint for the United States in its relations with the world include a gradual US withdrawal from NATO, which he considers a Cold War relic, an “exercise in nostalgia, an excuse for pretending that the past is still present” (p.50).  Defending Europe is now “best left to Europeans” (p.50), he argues.  In any reasoned reevaluation of United States foreign policy priorities, moreover, Canada and Mexico should take precedence over European defense.  Threats to Canadian territorial sovereignty as the Arctic melts “matter more to the United States than any danger Russia may pose to Ukraine” (p.169).

I pondered that sentence throughout February 2022, wondering whether Bacevich was at that moment as unequivocal about the United States’ lack of any geopolitical interest in Ukraine as he had been when he wrote After the Apocalypse.  Did he still maintain that the Ukraine-Russia conflict should be left to the Europeans to address?  Was it still his view that the United States has no business defending beleaguered and threatened democracies far from its shores?  The answer to both questions appears to be yes.  Bacevich has had much to say about the conflict since mid-February of this year, but I have been unable to ascertain any movement or modification on these and related points.

In an article appearing in the February 16, 2022, edition of The Nation, thus prior to the invasion, Bacevich described the Ukrainian crisis as posing “minimal risk to the West,” given that Ukraine “possesses ample strength to defend itself against Russian aggression.”  Rather than flexing its muscles in faraway places, the United States should be “modeling liberty, democracy, and humane values here at home. The clear imperative of the moment is to get our own house in order” and avoid “[s]tumbling into yet another needless war.”   In a nutshell, this is After the Apocalypse’s broad vision for American foreign policy. 

Almost immediately after the Russian invasion, Bacevich wrote an op-ed for the Boston Globe characterizing the invasion as a “crime” deserving of “widespread condemnation,” but cautioning against a “rush to judgment.”  He argued that the United States had no vital interests in Ukraine, as evidenced by President Biden’s refusal to commit American military forces to the conflict.  But he argued more forcefully that the United States lacked clean hands to condemn the invasion, given its own war of choice in Iraq in 2003 in defiance of international opinion and the “rules-based international order” (Bacevich’s quotation marks).  “[C]oercive regime change undertaken in total disregard of international law has been central to the American playbook in recent decades,” he wrote.  “By casually meddling in Ukrainian politics in recent years,” he added, alluding most likely to the United States’ support for the 2013-14 “Euromaidan protests” which resulted in the ouster of pro-Russian Ukrainian president Viktor Yanukovych, it had “effectively incited Russia to undertake its reckless invasion.”

Bacevich’s article for The Nation also argued that the idea of American exceptionalism was alive and well in Ukraine, driving US policy.  Bacevich defined the idea hyperbolically as the “conviction that in some mystical way God or Providence or History has charged America with the task of guiding humankind to its intended destiny,” with these ramifications:

We Americans—not the Russians and certainly not the Chinese—are the Chosen People.  We—and only we—are called upon to bring about the triumph of liberty, democracy, and humane values (as we define them), while not so incidentally laying claim to more than our fair share of earthly privileges and prerogatives . . . American exceptionalism justifies American global primacy.

Much of Bacevich’s commentary about the Russian invasion of Ukraine reflects his impatience with short and selected historical memory.  Expansion of NATO into Eastern Europe in the 1990s, Bacevich told Democracy Now in mid-March of this year, “was done in the face of objections by the Russians and now we’re paying the consequences of those objections.”  Russia was then “weak” and “disorganized” and therefore it seemed to be a “low-risk proposition to exploit Russian weakness to advance our objectives.”  While the United States may have been advancing the interests of Eastern European countries who “saw the end of the Cold War as their chance to achieve freedom and prosperity,” American decision-makers after the fall of the Soviet Union nonetheless “acted impetuously and indeed recklessly and now we’re facing the consequences.”

* * *

“Short and selected historical memory” also captures Bacevich’s objections to the idea of American exceptionalism.  As he articulates throughout After the Apocalypse, the idea constitutes a whitewashed version of history, consisting “almost entirely of selectively remembered events” which come “nowhere near offering a complete and accurate record of the past” (p.13).  The recently deceased former US Secretary of State Madeleine Albright’s 1998 pronouncement that America resorts to military force because it is the “indispensable nation” which “stand[s] tall and see[s] further than other countries into the future” (p.6) may be the most familiar statement of American exceptionalism.  But versions of the idea that the United States has a special role to play in history and in the world have been entertained by foreign policy elites of both parties since at least World War II, with the effect if not intention of ignoring or minimizing the dark side of America’s global involvement.

The darkest episode, in Bacevich’s view, is the 2003 Iraq war, a war of choice for regime change based on the false premise that Saddam Hussein maintained weapons of mass destruction.  After the Apocalypse returns repeatedly to the disastrous consequences of the Iraq war, but it is far from the only instance of intervention that fits uncomfortably with the notion of American exceptionalism.  Bacevich cites the CIA-led coup overthrowing the democratically elected government of Iran in 1953, the “epic miscalculation” (p.24) of the Bay of Pigs invasion in 1961, and US complicity in the assassination of South Vietnamese president Ngo Dinh Diem in 1963, not to mention the Vietnam war itself.  When commentators or politicians indulge in American exceptionalism, he notes, they invariably overlook these interventions.

A telling example is an early 2020 article in Foreign Affairs by then-presidential candidate Joe Biden.  Under the altogether conventional title “Why America Must Lead Again,” Biden contended that the United States had “created the free world” through victories in two World Wars and the fall of the Berlin Wall.  The “triumph of democracy and liberalism over fascism and autocracy,” Biden wrote, “does not just define our past.  It will define our future, as well” (p.16).  Not surprisingly, the article omitted any reference to Biden’s support as chairman of the Senate Foreign Relations Committee for the 2003 invasion of Iraq.

Biden had woven “past, present, and future into a single seamless garment” (p.16), Bacevich contends.  By depicting history as a “story of America rising up to thwart distant threats,” he had regurgitated a narrative to which establishment politicians “still instinctively revert in stump speeches or on patriotic occasions” (p.17) — a narrative that in Bacevich’s view “cannot withstand even minimally critical scrutiny” (p.16).  Redefining the United States’ “role in a world transformed,” to borrow from the book’s subtitle, will remain “all but impossible until Americans themselves abandon the conceit that the United States is history’s chosen agent and recognize that the officials who call the shots in Washington are no more able to gauge the destiny of humankind than their counterparts in Berlin or Baku or Beijing” (p.7).

Although history might well mark Putin’s invasion of Ukraine as an apocalyptic event and 2022 as an apocalyptic year, the “apocalypse” of Bacevich’s title refers to the year 2020, when several events brought into plain view the need to rethink American foreign policy.  The inept initial response to the Covid pandemic in the early months of that year highlighted the ever-increasing economic inequalities among Americans.  The killing of George Floyd demonstrated the persistence of stark racial divisions within the country.  And although the book appeared just after the presidential election of 2020, Bacevich would probably have included the assault on the US Capitol in the first week of 2021, rather than the usual transfer of presidential power, among the many policy failures that in his view made the year apocalyptic.  These failures, Bacevich intones:

 ought to have made it clear that a national security paradigm centered on military supremacy, global power projection, decades old formal alliances, and wars that never seemed to end was at best obsolete, if not itself a principal source of self-inflicted wounds.  The costs, approximately a trillion dollars annually, were too high.  The outcomes, ranging from disappointing to abysmal, have come nowhere near to making good on promises issued from the White House, the State Department, or the Pentagon and repeated in the echo chamber of the establishment media (p.3).

In addition to casting doubts on the continued viability of NATO and questioning any US interest in the fate of Ukraine, After the Apocalypse dismisses as a World War II era relic the idea that the United States belongs to a conglomeration of nations known as “the West,” and that it should lead this conglomerate.  Bacevich advocates putting aside “any residual nostalgia for a West that exists only in the imagination” (p.52).  The notion collapsed with the American intervention in Iraq, when the United States embraced an approach to statecraft that eschewed diplomacy and relied on the use of armed force, an approach to which Germany and France objected.  By disregarding their objections and invading Iraq, President George W. Bush “put the torch to the idea of transatlantic unity as a foundation of mutual security” (p.46).  Rather than indulging the notion that whoever leads “the West” leads the world, Bacevich contends that the United States would be better served by repositioning itself as a “nation that stands both apart from and alongside other members of a global community” (p.32).

After the apocalypse – that is, after the year 2020 – the repositioning that will redefine America’s role in a world transformed should be undertaken from what Bacevich terms a “posture of sustainable self-sufficiency” as an alternative to the present “failed strategy of military hegemony” (p.166).  Sustainable self-sufficiency, he is quick to point out, is not a “euphemism for isolationism” (p.170).  The government of the United States “can and should encourage global trade, investment, travel, scientific collaboration, educational exchanges, and sound environmental practices” (p.170).  In the 21st century, international politics “will – or at least should – center on reducing inequality, curbing the further spread of military fanaticism, and averting a total breakdown of the natural world” (p.51).  But before the United States can lead on these matters, it “should begin by amending its own failings” (p.51), starting with concerted efforts to bridge the racial divide within the United States.

A substantial portion of After the Apocalypse focuses on how racial bias has infected the formulation of United States foreign policy from its earliest years.  Race “subverts America’s self-assigned role of freedom,” Bacevich writes.  “It did so in 1776 and it does so still today” (p.104).  Those who traditionally presided over the formulation of American foreign policy have “understood it to be a white enterprise.”  Non-whites “might be called upon to wage war,” he emphasizes, but “white Americans always directed it” (p.119).  The New York Times’ 1619 Project, which seeks to show the centrality of slavery to the founding and subsequent history of the United States, plainly fascinates Bacevich.  The project in his view serves as an historically based corrective to another form of American exceptionalism, questioning the “very foundation of the nation’s political legitimacy” (p.155).

After the Apocalypse raises many salient points about how American foreign policy interacts with other priorities as varied as economic inequality, climate change, health care, and rebuilding American infrastructure.  But it leaves the impression that America’s relationships with the rest of the world have rested in recent decades almost exclusively on flexing American military muscle – the “failed strategy of militarized hegemony.”  Bacevich says little about what is commonly termed “soft power,” a fluid term that stands in contrast to military power (and in contrast to punitive sanctions of the type being imposed presently on Russia).  Soft power can include such forms of public diplomacy as cultural and student exchanges, along with technical assistance, all of which have a strong track record in quietly advancing US interests abroad.

* * *

To date, five full weeks into the Ukrainian crisis, the United States has conspicuously rejected the “failed strategy of militarized hegemony.”  Early in the crisis, well before the February 24th invasion, President Biden took the military option off the table in defending Ukraine.  Although Ukrainians would surely welcome the deployment of direct military assistance on their behalf, as of this writing NATO and the Western powers are fighting back through stringent economic sanctions – diplomacy with a very hard edge – and provision of weaponry to the Ukrainians so they can fight their own battle, in no small measure to avoid a direct nuclear confrontation with the world’s other nuclear superpower.

The notion of “the West” may have seemed amorphous and NATO listless prior to the Russian invasion.  But both appear reinvigorated and uncharacteristically united in their determination to oppose Russian aggression.  The United States, moreover, appears to be leading both, without direct military involvement but far from heavy-handedly, collaborating closely with its European and NATO partners.  Yet, none of Bacevich’s writings on Ukraine hint that the United States might be on a more prudent course this time.

Of course, no one knows how or when the Ukraine crisis will terminate.  We can only speculate on the long-term impact of the crisis on Ukraine and Russia, and on NATO, “the West,” and the United States.  Ukraine 2022 may well figure as a future data point in American exceptionalism, another example of the “triumph of democracy and liberalism over fascism and autocracy,” to borrow from President Biden’s Foreign Affairs article.  But it could also be one of the data points that its proponents choose to overlook.

Thomas H. Peebles

La Châtaigneraie, France

March 30, 2022

Flawed Ideal

Michael Sandel, The Tyranny of Merit:

What’s Become of the Common Good (Farrar, Straus and Giroux, 2020)

 

“Those who work hard and play by the rules should be able to rise as far as their talents will take them.”  This catchphrase, a favorite of politicians of all political stripes, captures in shorthand the American idea of meritocracy. More formally, Merriam-Webster defines meritocracy as a “system, organization, or society in which people are chosen and moved into positions of success, power, and influence on the basis of their demonstrated abilities and merit.”  In a modern democracy, one would be hard pressed to argue against the idea that life’s major opportunities should be open to all who can prove themselves through talent and hard work.

Renowned Harvard professor Michael Sandel is not about to make that argument.  But in The Tyranny of Merit: What’s Become of the Common Good, Sandel nonetheless delivers a searing critique of meritocracy today, primarily in the United States and secondarily in Great Britain.  Sandel, one of America’s best-known philosophers, begins The Tyranny of Merit by acknowledging that as an abstract principle, meritocracy has won the day in the United States, dominating the national debate about such matters as access to jobs, education, and public office.  “Our disagreements are less about the principle itself than about what it requires,” he writes. “When people complain about meritocracy, the complaint is usually not about the ideal but about our failure to live up to it” (p.119).

But in this provocative, against-the-grain work, Sandel asks us to consider the possibility that the real problem is not that we have fallen short in trying to live up to the meritocratic ideal, but that the ideal itself is flawed. Sandel’s argument rests on a straightforward premise: today’s meritocracy stratifies society into winners and losers, defined mostly by economic status and university diplomas, generating hubris among the winners and resentment and humiliation among the losers.

The winners, our elites, “believe they have earned their success through their own talent and hard work” (p.14),  Sandel writes.  They view success not as a matter of luck or grace, but as something earned through effort and striving, making success a “sign of virtue. My affluence is my due” (p.59).  The downside of meritocratic stratification is that those left behind—typically those without a college education—are perceived as being responsible for their fate, with “no one to blame but themselves” (p.14).  The result is that we have lost a shared notion of the common good and with it a sense of the solidarity that might bind us together in all our diversity.

The more we view ourselves as self-made and self-sufficient, Sandel contends, the “less likely we are to care for the fate of those less fortunate than ourselves” (p.59).   Meritocratic hubris “banishes all sense of gift or grace. It diminishes our capacity to see ourselves as sharing a common fate. It leaves little room for solidarity” (p.25).  Sandel links meritocracy’s hard edge to rising economic inequality at home over the past four decades, accentuated by what we term globalization—the form of capitalism associated with freer international trade, increasingly inter-dependent markets and, in the United States, the loss of blue-collar jobs to foreign locations with lower labor costs.

The jump in economic inequality in the United States began around 1980 with the presidency of Ronald Reagan, while globalization took off after the fall of the Soviet Union in 1991.  Today, Sandel points out, the richest one percent in the United States take in more than the combined earnings of the entire bottom half of the population, with median income stagnating for the past forty years. In 1965, according to the Economic Policy Institute, the CEOs of America’s largest public corporations earned about twenty-one times what an average worker in the corporation earned; today, the ratio is 350 to 1.  One of Sandel’s key points is that rising economic inequality, combined with market-driven globalization, contributed to Donald Trump’s electoral victory in 2016 in the United States, to the Brexit vote that same year in the United Kingdom, and to the phenomenon known as populism in both countries and elsewhere around the world.

Sandel characterizes the Trump electoral victory as an “angry verdict on decades of rising inequality and a version of globalization that benefits those at the top but leaves ordinary citizens feeling disempowered” (p.17).   Trump’s victory tapped into a “wellspring of anxieties, frustrations, and legitimate grievances to which the mainstream parties had no compelling answer” (p.17-18).  It was also a rebuke for a “technocratic approach” to politics that is “tone-deaf to the resentments of people who feel the economy and the culture have left them behind” (p.17).

The meritocratic promise, Sandel emphasizes, is not one of greater equality, but of “greater and fairer mobility” (p.85).  Allocating jobs and opportunities according to merit simply “reconfigures inequality to align with ability” (p.117); it does not reduce inequality.  This reconfiguration “creates a presumption that people get what they deserve” (p.117).  To be sure, Sandel sees nothing wrong with hiring and promoting people based on merit. In fact, he writes, it is “the right thing to do” (p.33), dictated by both efficiency and fairness.

But if we are to overcome the “tyranny of merit,” we need to rethink the way we conceive success, question the meritocratic conceit that those on the top have made it on their own, and challenge the inequalities of wealth and esteem that are “defended in the name of merit but that foster resentment, poison our politics, and drive us apart” (p.155).  To move beyond the “polarized politics of our time,” we must have a “reckoning with merit” (p.14), Sandel argues, a reckoning that begins with the two domains of life most central to the meritocratic conception of success, education and work.

* * *

The Tyranny of Merit treats both education and work throughout but builds up to a final chapter on each: “The Sorting Machine,” largely a discussion of the admission process at elite American colleges and universities; and “Recognizing Work,” a plea for restoring a sense of dignity to the work of those without a college or university degree.  Linking the two is what Sandel terms “credentialism,” the meritocratic insistence that a college degree is the “primary route to a respectable job and a decent life” (p.73).

Credentialism and “disdain for the poorly educated” (p.95), Sandel suggests, may constitute the last acceptable prejudice in an age when racism and sexism are frowned upon in most circles.  The constant call for working people to improve their condition by getting a college degree, however well intentioned, “eventually valorizes credentialism and undermines social recognition and esteem for those who lack the credentials the system rewards” (p.89).  Building a politics around the idea that a college degree is a prerequisite for dignified work and social esteem, moreover, has a “corrosive effect on democratic life.  It devalues the contributions of those without a diploma, fuels prejudice against less educated members of society … and provokes political backlash” (p.104).

But if success in today’s meritocratic world is measured primarily by education and economic standing, it is unclear how the two fit together, part of a more fundamental question that runs through Sandel’s analysis: just who are meritocracy’s self-satisfied winners? How do we identify them?  Much of The Tyranny of Merit suggests that they are mostly the super-rich, such as Wall Street financiers and high-ranking corporate executives, along with top government officials, such as cabinet officers and leading legislators.  Sandel emphasizes—overemphasizes, in my view—the importance of a degree from an elite college or university, defined as one which admits less than 20% of its applicants.  But what about the Harvard graduate who goes on to be a high school math teacher?  Or the high school dropout who creates a wildly successful construction business and lives at the upper end of the upper middle-class?

“The Sorting Machine,” Sandel’s chapter on higher education, focuses primarily on the differences in today’s meritocratic society between those credentialed with a degree from an elite college or university, and those with degrees from other educational institutions, including community colleges.  Degrees from elite institutions are perceived all too often as the only reliable prerequisites for dignified work and social esteem—a ticket upward for those aspiring to rise on the economic ladder, and an insurance policy against slipping down it for those already near the top.  But the majority of students at elite institutions, Sandel notes, still come from wealthy families, due in no small part to the many advantages that well-off parents can provide their children, giving rise to a “pervasive unfairness that prevents higher education from living up to the meritocratic principle it professes” (p.11).  Still, only about 20% of graduating high school seniors get caught up in the frenzied pursuit of admission to elite colleges and universities.

For the remaining 80%, Sandel writes, the “tyranny of merit is not about a soul-killing competition for admission but about a demoralizing world of work that offers meager economic reward and scant social esteem to those who lack meritocratic credentials” (p.188).   He quotes one of his students, a young man from Texas, who opined that one must work hard in high school to “get into a good college and get a good job. If not, you work in the oil fields” (p.77).  Becoming a plumber or electrician or dental hygienist, Sandel argues at another point, should be “respected as a valuable contribution to the common good, not regarded as a consolation prize for those who lack the SAT scores or financial means to make it to the Ivy League” (p.191).  That sentence more than puzzled me.

Had Sandel himself succumbed to the elitist conceit that the pathway to meaningful and important work is open only to graduates of a small sliver of higher education institutions, the very credentialism he seeks to discredit?  Or was he merely expressing the perception of many of his students, like the young man from Texas?  This binary view—the Ivy way or the highway—may well be how the world looks from places like Harvard, within the belly of the elitist beast, but the real world is awash with leaders, movers, and shakers whose degrees do not come from hypercompetitive, elite American colleges and universities.

I am willing to venture that the president of just about any American college or university considered non-elitist would be delighted to provide the names of “famous” alumni and cite a litany of graduates who have gone on to important positions in the community and elsewhere in the world. As one personal example, while assigned to a United States Embassy in Eastern Europe, I worked under two different US Ambassadors, both extraordinary leaders with multiple talents, each a genuine superstar within the ranks of the US Foreign Service.  The first was a graduate of Arkansas State University, the second from Grand Valley State University in Michigan, neither likely to be on a list of elitist higher education institutions.

Sandel advocates more support, moral as well as financial, for non-elitist higher education institutions.  But his more pressing concern is to restore dignity to those without a college or university degree, a surprising 70% of the adult American population.  His chapter “Recognizing Work” focuses on the role of blue-collar workers in American society, particularly those who voted for Donald Trump in the last two presidential elections—thus mostly white blue-collar workers.

Sandel notes that from the end of World War II to the 1970s, it was possible for those without a college degree to find good work, support a family, and lead comfortable middle-class lives.  Globalization and the loss of well-paying blue-collar jobs have made this far more difficult today. Although overall per capita income in the United States has increased 85% since 1979, white men without four-year college degrees now make less, in real terms, than they did then.  Any serious response to working-class frustrations, Sandel argues, should start with rethinking our notions of the common good as they apply to those without a college degree.

How a society honors and rewards work is “central to the way it defines the common good” (p.205), implicating such questions as what counts as a valuable contribution to the common good and what we really owe to one another.  Today we operate under what Sandel terms a market definition of the common good, where individual preferences and consumer welfare are paramount. If the common good is “simply a matter of satisfying consumer preferences,” Sandel contends, then market wages are a “good measure of who has contributed what. Those who make the most money have presumably made the most valuable contribution to the common good, by producing the goods and services that consumers want”  (p.208).

Sandel seeks to displace the market definition with a civic definition, rooted in the thinking of Aristotle and Hegel, the American republican tradition, and Catholic social thinking.  A civic definition is “inescapably contestable” (p.214), Sandel warns. We may never come to agree on its substantive terms but nonetheless need to engage in a debate over what those terms could include. This will require “reflecting critically on our preferences—ideally, elevating and improving them—so that we can live worthwhile and flourishing lives” (p.208).   Moving the debate about the dignity of work away from the market definition of the common good has the potential to “disrupt our partisan complacencies, morally invigorate our public discourse, and move us beyond the polarized politics that four decades of market faith and meritocratic hubris have bequeathed” (p.214).

Critical reflection on the common good and a renewed debate on the dignity of work are incontestably fine ideas, but it is difficult to imagine any wide-scale debate in today’s United States that would take us in the direction of a wholesale change in the prevailing meritocratic ethos.  Yet several pragmatic steps that could narrow the glaring economic disparities between the very rich and working-class Americans might, in turn, smooth some of the sharper edges of the meritocratic ethos and thereby enhance the dignity of work.

One place to start lies in changing tax policies.  A political agenda that recognizes the dignity of work, Sandel argues, would “use the tax system to reconfigure the economy of esteem by discouraging speculation and honoring productive labor” (p.218).  A consumption or “VAT” tax would be a modest step in this direction, along with a “financial transactions tax on high-frequency trading, which contributes little to the real economy” (p.219).  A more progressive income tax with higher rates on the highest brackets—top tax rates in the 1950s reached 91%—would also help narrow economic disparities, as would higher estate taxes, which today exempt all estate wealth up to about $12 million.  Then there is my favorite: enhanced funding for the IRS to equip the agency to better pursue high level tax fraud and avoidance.

Narrowing the economic gap can also be accomplished from below by more generous social welfare benefits, not unlike those contained in President Biden’s proposed Build Back Better Act: universal and free childcare, affordable health insurance, and extending the Child Tax credit and Earned Income Tax credit.  More job retraining programs need to be established for workers whose jobs move overseas and higher education—at both elite and non-elite institutions—needs to be made more accessible for young people from lower income families (to include pathways to relief for student debt).  Sandel mentions each briefly.  Surprisingly, he doesn’t give much attention to the potential of a reinvigorated organized labor movement to diminish some of the most glaring economic disparities in American society, which could in turn provide a tangible statement of the dignity and value of work.  The term solidarity, after all, is closely associated with the American labor movement.

* * *

Sandel’s trenchant critique of the meritocratic ethos in today’s United States leads — inescapably in my mind — to the conclusion that changing that ethos starts with narrowing the space between those at the top of the economic ladder and the ladder’s bottom half.  Until then, The Tyranny of Merit’s eloquently argued case for a more humane version of the common good could be scintillating subject matter for a (Sandel-led) philosophy seminar at Harvard, but with little likelihood of gaining traction in the world beyond.

Thomas H. Peebles

La Châtaigneraie, France

March 23, 2022

The Authoritarian Playbook for Uprooting Democracy

 

Ruth Ben-Ghiat, Strongmen: Mussolini to the Present (Norton, 2020)

In late November 2021, the Stockholm-based International Institute for Democracy and Electoral Assistance (IDEA) issued its annual report showing democratic slippage and authoritarian ascendancy throughout the world, with the United States included among the world’s backsliding democracies.  The report’s ominous conclusion was that the number of countries moving in the direction of authoritarianism is three times the number moving toward democracy.  Less than a month later, US President Joe Biden opened a “Summit for Democracy” in Washington, D.C., attended virtually by representatives of more than 100 countries, along with civil society activists, business leaders and journalists.  Alluding to but not dwelling upon the increasing threats to democracy that the United States faces internally, Biden described the task of strengthening democracy to counter authoritarianism as the “defining challenge of our time.”

The short period between the IDEA report and the democracy summit coincided with the time I was grappling with Ruth Ben-Ghiat’s Strongmen: Mussolini to the Present, a work that provides useful but hardly reassuring background on today’s authoritarian ascendancy.  As her title suggests, Ben-Ghiat finds the origins of the 21st century version of authoritarianism in the Fascist regime of Benito Mussolini, appointed in 1922 by King Victor Emmanuel III to head the Italian government as Prime Minister, an appointment that marked the end of Italy’s liberal democratic parliamentary regime.

Ben-Ghiat defines authoritarianism as a political system in which executive power is concentrated in a single individual and predominates “at the expense of the legislative and judicial branches of government” (p.5), with the single individual claiming that he and his agents are “above the law, above judgment, and not beholden to the truth” (p.253).    A professor of history and Italian Studies at New York University and a leading academic expert on Mussolini and modern Italian history, Ben-Ghiat uses her knowledge of the man called Il Duce and his Fascist party’s rule in  Italy from 1922 to 1943 as a starting point to build a more comprehensive picture of leaders who have followed in Mussolini’s footsteps – the “strongmen”  of her title, or “authoritarians,” two terms she uses interchangeably.

Ben-Ghiat divides modern authoritarian rule since Mussolini’s time into three general historical periods: 1) the fascist era of Mussolini and his German ally, Adolf Hitler, 1919-1945; 2) the age of military coups, 1945 to 1990; and 3) what she terms the new authoritarian age, 1990 to the present.  But Strongmen is not an historical work, arranged in chronological order. Ben-Ghiat focuses instead on the tools and tactics selected strongmen have used since Mussolini’s time.

In ten chapters, divided into three general sections, “Getting to Power,” “Tools of Rule,” and “Losing Power,” Ben-Ghiat  elaborates respectively upon how strongmen have obtained, maintained, and lost power.  Each chapter sets forth general principles of strongman rule, to which she adds illustrative examples of how specific strongmen have adhered to the principles.   For Ben-Ghiat, the key tools in the strongman’s toolbox are propaganda, violence, corruption and, most originally, virility.  Each is the subject of a separate chapter, but they are “interlinked” (p.7) and each is referred to throughout the book.

Ben-Ghiat’s cast of characters changes from one chapter to the next, depending upon its subject matter.  At the outset, she lists 17 “protagonists,” authoritarian leaders who are mentioned at least occasionally throughout the book, including such familiar contemporary leaders as Hungary’s Viktor Orbán, Turkey’s Recep Tayyip Erdoğan, and Brazil’s Jair Bolsonaro.  But eight dominate her narrative: Mussolini and Hitler, who personified the Fascist era, with Mussolini making an appearance in nearly every chapter; Spain’s General Francisco Franco, a transition figure from fascism to military coup, a fascist in the 1930s and a pro-American client during the Cold War; Chile’s Augusto Pinochet, who modeled himself after Franco and embodied the era of military coups; and four “modern” authoritarians, Italy’s Silvio Berlusconi, who served as Italy’s Prime Minister in three governments, from 1994 to 1995, 2001 to 2006, and 2008 to 2011; Russia’s Vladimir Putin, who followed Boris Yeltsin’s chaotic attempt in the 1990s to establish neoliberal democratic institutions after the fall of the Soviet Union; Libya’s Muammar Gaddafi, more a transition figure from the age of military coups to 21st century authoritarianism; and yes, America’s Donald Trump.  Reminding readers how closely Trump and his administration adhered to the authoritarian playbook appears to be one of the book’s main if unstated purposes.

Among the eight featured authoritarian leaders, all but Gaddafi rose to power in systems that were in varying degrees democratic.  How authoritarians manage to weaken democracy, often using democratic means, is the necessary backdrop to Ben-Ghiat’s examination of the strongman’s playbook.   All eight of her featured leaders sought in one way or another to undermine existing democratic norms and institutions.  Ben-Ghiat excludes strong women political leaders, such as Indira Gandhi and Margaret Thatcher, for this very reason.  No woman leader has yet “sought to destroy democracy” (p.5), she argues, although she does not rule out the possibility that a future female leader could meet the authoritarian criteria.

Among the featured eight, moreover, only Gaddafi could be considered left of center on the political spectrum.  The other seven fit comfortably on the right side.  While there would be plenty of potential subjects to choose from for an examination of strongmen of the left – Joseph Stalin, Mao Zedong and Fidel Castro all come readily to mind – Strongmen is largely an analysis of right-wing authoritarianism.  For Ben-Ghiat, as for President Biden, combatting this form of authoritarianism constitutes “one of the most pressing matters of our time” (p.4).

* * *

From Mussolini and Hitler to Berlusconi and Trump, the strongman’s rule has been almost by definition highly personal.  Strongmen, Ben-Ghiat argues, do not distinguish between their individual agendas and those of the nation they rule.  They have proven particularly adept at appealing to negative emotions and powerful resentments.  They rise to power in moments of uncertainty and transition, generating support when society is polarized, or divided into two opposing ideological camps, which is “why they do all they can to exacerbate strife”  (p.8).

A strongman’s promise to return his nation to greatness constitutes the “glue” (p.66) of modern authoritarian rule, Ben-Ghiat argues.  The promise typically combines a sense of nostalgia and the fantasy of returning to an imagined earlier era with a bleak view of the present and a glowing vision of the future.  In the chaos of post-World War I Italy, Mussolini invoked the lost imperial grandeur of the Roman Empire.  Putin speaks nostalgically of the Soviet era.  Trump’s 2017 inaugural address cast the United States as a desolate place of “rusted-out factories scattered like tombstones across the landscape of our nation” (p.58), the dystopian picture of contemporary America which underpinned his ubiquitous slogan Make America Great Again.

Franco and Pinochet were typical of right-wing authoritarians who organized the path to the glorious future around counterrevolutionary crusades against perceived leftist subversives.  But in what are sometimes termed “developing” or “Third World” countries, the return to national greatness focuses more frequently upon the remnants of foreign occupation.  Rather than leading a revolt against pre-existing democratic institutions and norms, anti-imperialist leaders like Gaddafi use their peoples’ “anger over the tyranny of Western colonizers to rally followers,” while adapting “traditions of colonial violence for their own purposes” (p.36), Ben-Ghiat writes.

To gain and maintain power, strongmen utilize a style of propaganda which Ben-Ghiat describes as a “set of communication strategies designed to sow confusion and uncertainty, discourage critical thinking, and persuade people that reality is what the leader says it is”  (p.93).  From Mussolini’s use of newsreels and Hitler’s public rallies to Trump’s use of Twitter, authoritarians have employed “direct communication channels with the public, allowing them to pose as authentic interpreters of the public will”  (p.93).

Propaganda, moreover, encourages people to see violence differently, as a “national and civic duty and the price of making the country great” (p.166).  General Franco murdered and jailed Spanish leftists at an astounding rate, both in the Spanish Civil War, when he was supported by Mussolini and Hitler, and during World War II, when he remained neutral.  His claim to legitimacy rested on the notion that he had brought peace to the land and saved it from apocalyptic leftist violence.  But his real success, Ben-Ghiat writes, was in “creating silence around memories of his violence” (p.232).

Augusto Pinochet, fashioning himself in the image of Franco, also strove to present an image of Chile as a bastion of anti-communist stability.  But central to Pinochet’s rule was the systematic torture and execution of Chilean dissidents and leftists, “not [as] isolated sadism but state policy” (p.165), according to an Amnesty International report. Pinochet’s secret police agency, the DINA, drew upon neo-Nazis living among the country’s large German population to execute its mission of “cleansing Chilean society of leftist influence and making Chile a center of the international struggle against Marxism” (p.178).

Gaddafi envisioned himself as the center of an anti-imperialist, anti-Zionist world, and bankrolled a wide range of revolutionary and terrorist movements across the globe while adopting terrorist methods at home to eliminate Libyan dissenters. He used television to present violence as mass spectacle, subjecting dissenting students to public, televised hangings and broadcasting the entire trial and execution of a dissident in 1984.  And Donald Trump’s calls for Hillary Clinton’s imprisonment and allusions to her being shot, shocking to many Americans, were “behaviors more readily associated with fascist states or military juntas” (p.62), Ben-Ghiat writes.

Almost invariably, strongmen use the power of their office for private gain, the classic definition of corruption.  In tandem with other tools, such as purges of the judiciary, corruption produces a system that tolerates criminality and encourages broader changes in behavioral norms to “make things that were illegal or immoral appear acceptable, whether election fraud, torture, or sexual assault” (p.144).  The term “kleptocracy,” much in vogue today, refers to a state in which the looting of public treasuries and resources often appears to be the central purpose of government.

Joseph Mobutu Sese Seko, the staunch anti-communist leader of Zaire (now Democratic Republic of Congo) from 1965 to 1997, appears here primarily to illustrate what US Representative Stephen Solarz termed in 1991 the “kleptocracy to end all kleptocracies,” in which Mobutu set the standard by which “all future international thieves will have to be measured” (p.14).  Mobutu’s country, awash in raw materials estimated to be worth in excess of $24 trillion, has the dubious distinction of being the world’s richest resource country with the planet’s poorest population, according to Tom Burgis’ insightful study of kleptocratic African regimes, The Looting Machine (reviewed here in 2016).  By the time he was forced into exile in 1997, Mobutu had amassed a $5 billion fortune, but Zaire had lost $12 billion in capital and resource flight and increased its debt by $14 billion.

New patronage systems allow the strongman’s cronies and family members to amass wealth, offering power and economic reward.  Vladimir Putin places oligarchs in competition for state resources and his favor, treating the country as an entity to be exploited for private gain.  While he poses as a nationalist defender against “globalists,” Putin uses global finance to launder and hide money.  He and his associates have removed an estimated $325 billion from Russia since 2006.  By 2019, 3% of the Russian population held 89% of the country’s financial assets.

Silvio Berlusconi maintained a curious and secretive relationship with Putin that almost certainly benefited him financially, typical of how Berlusconi normalized corruption by bending the institutions of Italian democracy to “accommodate his personal circumstances,” and by “partnering with authoritarians and elevating himself above the law” (p.161).  He retained control over his extensive holdings in television, publishing and advertising, putting family members and loyalists in charge.  The vastness of his media empire “made it hard to police his mixing of personal and business interests” (p.159).  While Italy remained a nominal democracy under Berlusconi, he turned the Italian government into what Ben-Ghiat describes as a “vehicle for accumulating more personal wealth and power on the model of the illiberal leaders he so admired” (p.246), alluding to his particular partnership with Putin and an even stranger partnership with Gaddafi.

Ben-Ghiat goes beyond other discussions of authoritarianism by highlighting the extent to which virility — a cult of masculinity — enables the strongman’s corruption by projecting the idea that he is “above laws that weaker individuals must follow” (p.8).  Displays of machismo are “not just bluster, but a way of exercising power at home and conducting foreign policy,” she writes.  Far from being a private affair, the sex lives of strongmen reveal how “corruption, propaganda, violence and virility work together” (p.120).

In portions of the book most likely to appeal to adolescent males, Ben-Ghiat details the unconstrained sex lives of Mussolini and Gaddafi.   Paradoxically, Gaddafi afforded Libyan women far more independence than they had enjoyed before he came to power in 1969.  He promoted women as part of his revolutionary measures, while privately constructing a system – modeled, apparently, on that of Mussolini – to “procure and confine women for his personal satisfaction” (p.132).

Silvio Berlusconi “used his control of Italy’s television and advertising markets to saturate the country with images of women in submissive roles”  (p.134).  The young female participants in Berlusconi’s famous sex parties often received cash to help them start a business, a chance at a spot in a Berlusconi show, or a boost into politics.   Bare-chested body displays constitute an “integral part” of Vladimir Putin’s identity as the “defender of Russia’s pride and its right to expand in the world”  (p.121), Ben-Ghiat writes.

As to Donald Trump, the infamous Access Hollywood tape, released amidst the 2016 presidential campaign, in which he bragged about groping non-consenting women, did not sink his candidacy.  Instead, the revelations “merely strengthened the misogynist brand of male glamor Trump had built over the decades” (p.138).  Trump’s campaign and presidency seemed dedicated to “[r]eclaiming male authority,” Ben-Ghiat contends, which meant “creating an environment in which men can act on their desires with impunity” (p.139).

Gaddafi was the last of the authoritarians who used violence openly as a tool to maintain power.  In the social media age, mass killings often generate bad press. New authoritarians need to gauge the tolerance of elites and the public for violence.  21st-century strongmen like Putin and Recep Tayyip Erdoğan tend to warehouse their enemies out of public scrutiny, preferring targeted violence, information manipulation and legal harassment to neutralize dissenters.  They use platforms like Facebook and Twitter to “target critics and spread hate speech, conspiracy theories, and lies” (p.111), and attempt to impoverish opponents and potential opponents by expropriating businesses they or their relatives might own.

Ben-Ghiat’s book appeared just before the 2020 American presidential election, weeks before the January 6, 2021 insurrection at the US Capitol, and before the notion of a “stolen election” took hold amongst a still-mystifyingly large portion of the American electorate.  But her insight that today’s authoritarians use elections to keep themselves in office, “deploying antidemocratic tactics like fraud or voter suppression to get the results they need” (p.49), reveals the extent to which former president Trump and a substantial segment of today’s Republican party, especially in key “battleground” states, are working off the strongman’s playbook.

* * *

After an apt dissection of  the way authoritarianism threatens the world’s democracies, Ben-Ghiat’s proposed solutions may leave readers wanting.  “Opening the heart to others and viewing them with compassion” (p.260) can constitute effective pushback against strongman rule, she argues.  Solidarity, love, and dialogue “are what the strongman most fears” (p.260-61).   More concretely, she emphasizes that to counter contemporary authoritarianism, we must “prioritize accountability and transparency in government” (p.253).   Above all, she recommends a “clear-eyed view of how strongmen manage to get into power and how they stay there” (p.250).  This deeply researched and persuasively argued work provides just such a view, making it a timely contribution to the urgent contemporary debates about the future of democracy.

Thomas H. Peebles

La Châtaigneraie, France

December 30, 2021


Filed under History, Politics, World History

Looking at the Arab Spring Through the Lens of Political Theory

Noah Feldman, The Arab Winter: A Tragedy

(Princeton University Press, 2020)

2011 was the year of the upheaval known as the “Arab Spring,” a time when much of the Arabic-speaking world seemed to have embarked on a path toward democracy—or at least a path away from authoritarian government. The upheaval began in December 2010, when a twenty-six-year-old Tunisian street fruit vendor, Mohamed Bouazizi, distraught over the confiscation of his cart and scales by municipal authorities, ostensibly because he lacked a required work permit, doused his body with gasoline and set himself on fire.  Protests aimed at Zine El Abidine Ben Ali, Tunisia’s autocratic ruler since 1987, began almost immediately after Bouazizi’s self-immolation.  On January 14, 2011, Ben Ali fled to Saudi Arabia and resigned.

One month later, Hosni Mubarak, Egypt’s strongman president since 1981, resigned his office. By that time, protests against ruling autocrats had broken out in Libya and in Yemen. In March, similar protests began in Syria. By year’s end, Yemen’s out-of-touch leader, Ali Abdullah Saleh, had been forced to resign, and Colonel Muammar Qaddafi—who had ruled Libya since 1969—was driven from office and shot by rebels. Only Syria’s Bashar al-Assad still clung to power, but his days, too, appeared numbered.

The stupefying departures in a single calendar year of four of the Arab world’s seemingly most firmly entrenched autocrats sent soaring the hopes of many, including the present writer.  Finally, we said, at last—at long, long last—democracy had broken through in the Middle East. The era of dictators and despots was over in that part of the world, or so we allowed ourselves to think. It did not seem far-fetched to compare 2011 to 1989, when the Berlin Wall fell and countries across Central and Eastern Europe were suddenly out from under Soviet domination.

But as we know now, ten years later, 2011 was no 1989: the euphoria and sheer giddiness of that year turned to despair.  Egypt’s democratically elected president Mohamed Morsi was replaced in 2013 by a military government that seems at least as ruthlessly autocratic as that of Mubarak.  Syria broke apart in an apparently unending civil war that continues to this day, with Assad holding onto power amidst one of the twenty-first century’s most severe migrant and humanitarian crises.  Yemen and Libya appear to be ruled, if at all, by tribal militias and gangs, conspicuously lacking stabilizing institutions that might hold the countries together.  Only Tunisia offers cautious hope of a democratic future. And hovering over the entire region is the threat of brutal terrorism, represented most terrifyingly by the self-styled Islamic State in Iraq and Syria, ISIS.

It is easy, therefore, almost inescapable, to write off the Arab Spring as a failure—to saddle it with what Harvard Law School professor Noah Feldman terms a “verdict of implicit nonexistence” (p.x) in The Arab Winter: A Tragedy.  But Feldman, a seasoned scholar of the Arabic-speaking world, would like us to look beyond notions of failure and implicit nonexistence to consider the Arab spring and its aftermath from the perspective of classical political theory.  Rather than emphasizing chronology and causation, as historians might, political theorists—the “philosophers who make it their business to talk about government” (p.8)—ask a normative question: what is the right way to govern? Looking at the events of 2011 and their aftermath from this perspective, Feldman hopes to change our “overall sense of what the Arab spring meant and what the Arab winter portends” (p.xxi).

In this compact but rigorously analytical volume, Feldman considers how some of the most basic notions of democratic governance—political self-determination, popular sovereignty, political agency, and the nature of political freedom and responsibility—played out over the course of the Arab Spring and its bleak aftermath, the “Arab Winter” of his title.   Feldman focuses specifically on Egypt, Tunisia, Syria, and ISIS, each meriting a separate chapter, with Libya and Yemen mentioned intermittently.  In an introductory chapter, he addresses the Arab Spring collectively, highlighting factors common to the individual countries that experienced the events of the Arab Spring and ensuing “winter.”  In each country, those events took place within a framework defined by  “political action that was in an important sense autonomous” (p.xiii).  

The Arab Spring marked a crucial, historical break from the era in which empires—Ottoman, European and American—were the primary arbiters of Arab politics.  The “central political meaning” of the Arab Spring and its aftermath, Feldman argues, is that it “featured Arabic-speaking people acting essentially on their own, as full-fledged, independent makers of their own history and of global history more broadly” (p.xii).  The forces arrayed against those seeking to end autocracy in their countries were also Arab forces, “not empires or imperial proxies” (p.xii).  Many of the events of the Arab Spring were nonetheless connected to the decline of empire in the region, especially in the aftermath of the two wars fought in Iraq in 1991 and 2003.  The “failure and retreat of the U.S. imperial presence” was an “important condition in setting the circumstances for self-determination to emerge” (p.41).  

While the massive protests against existing regimes that erupted in Tunisia, Egypt, Syria, Libya, and Yemen in the early months of 2011 were calls for change in the protesters’ own nation-states, there was also a broader if somewhat vague sense of trans-national Arab solidarity to the cascading calls for change.  By “self-consciously echoing the claims of other Arabic-speaking protestors in other countries,” Feldman argues, the protesters were “suggesting that a broader people—implicitly the Arab people or peoples—were seeking change from the regime or regimes . . . that were governing them” (p.2). The constituent peoples of a broader trans-national Arab “nation” were rising, “not precisely together but also not precisely separately” (p.29).

The early-2011 protests were based on the claim that “the people” were asserting their right to take power from the existing government and reassign it, a claim that to Feldman “sounds very much like the theory of the right to self-determination” (p.11).  The historian and the sociologist would immediately ask who was making this “grand claim on behalf of the ‘people’” (p.11).  But to the political theorist, the most pressing question is “whether the claim was legitimate and correct” (p.11).  Feldman finds the answer in John Locke’s Second Treatise of Government, first published in 1689. Democratic political theory since the Second Treatise has strongly supported the idea that the people of a constituent state may legitimately seize power from unjust and undemocratic rulers. Such an exercise of what could be termed the right to revolution is “very close to the central pillar of democratic theory itself” (p.11).  Legitimate government “originates in the consent of the governed;” a government not derived from consent “loses its legitimacy and may justifiably be replaced” (p.12).  The Egypt of the Arab Spring provides one of recent history’s most provocative applications of the Lockean right to self-determination.

* * *

Can a people which opted for constitutional democracy through a legitimate exercise of its political will opt to end democracy through a similarly legitimate exercise of its political will?  Can a democracy vote itself out of existence?  In his chapter on Egypt, Feldman concludes that the answer to these existential questions of political theory is yes, a conclusion that he characterizes as “painful” (p.59).  Just as massive and legitimate protests in Cairo’s Tahrir Square in January 2011 paved the way for forcing out aging autocrat Hosni Mubarak, so too did massive and legitimate protests in the same Tahrir Square in June 2013 pave the way for forcing out democratically-elected president Mohamed Morsi.

Morsi was a member of the Muslim Brotherhood—a movement banned under Mubarak that aspired to a legal order frequently termed “Islamism,” based upon Sharia Law and the primacy of the Islamic Quran.  Morsi won the presidency in June 2012 by a narrow margin over a military-affiliated candidate, but was unsuccessful almost from the beginning of his term.  In Feldman’s view, his gravest error was that he never grasped the need to compromise.  “If the people willed the end of the Mubarak regime, the people also willed the end of the Morsi regime just two and a half years later” (p.59), he contends. The Egyptian people rejected constitutional democracy, “grandly, publicly, and in an exercise of democratic will” (p.24).  While they may have committed an “historical error of the greatest consequence by repudiating their own democratic process,” that was the “choice the Egyptian people made” (p.63).

Unlike in Egypt, in Tunisia the will of the people—what Feldman terms “political agency”—produced what then appeared to be a sustainable if fragile democratic structure.  Tunisia succeeded because its citizens from across the political spectrum “exercised not only political agency but also political responsibility” (p.130).  Tunisian protesters, activists, civil society leaders, politicians, and voters all “realized that they must take into account the probable consequences of each step of their decision making” (p.130).  

Moving the country toward compromise were two older politicians from opposite ends of the political spectrum: seventy-two-year-old Rached Ghannouchi, representing Ennahda—an Islamist party with ties to the Egyptian Muslim Brotherhood—and Beji Caid Essebsi, then eighty-five, a rigorous secularist with an extensive record of government service.  Together, the two men led a redrafting of Tunisia’s Constitution, in which Ennahda dropped the idea of Sharia Law as the foundation of the Tunisian State in favor of a constitution that protected religion from statist dominance and guaranteed liberty for political actors to “promote religious values in the public sphere”—in short, a constitution that was “not simply democratic but liberal-democratic” (p.140).  

Tunisia had another advantage that Egypt lacked: a set of independent civil society institutions that had a “stake in continued stability,” along with a “stake in avoiding a return to autocracy” (p.145).  But Tunisia’s success was largely political, with no evident payoff in the country’s economic fortunes. The “very consensus structures that helped Tunisia avoid the fate of Egypt,” Feldman warns, ominously but presciently, have “created conditions in which the underlying economic causes that sparked the Arab spring protests have not been addressed” (p.150).   

As if to prove Feldman’s point, this past summer Tunisia’s democratically-elected President Kais Saied, a constitutional law professor like Feldman, froze Parliament and fired the Prime Minister, “vowing to attack corruption and return power to the people. It was a power grab that an overwhelming majority of Tunisians greeted with joy and relief,” The New York Times reported.  One cannot help but wonder whether Tunisia is about to confront and answer the existential Lockean question in a manner similar to Egypt a decade ago.

Protests against Syrian President Bashar al-Assad began after both Ben Ali in Tunisia and Mubarak in Egypt had been forced out of office, and initially seemed to be replicating those of Tunisia and Egypt.  But the country degenerated into a disastrous civil war that has rendered it increasingly dysfunctional.  The key to understanding why lies in the country’s denominational-sectarian divide, in which the Assad regime—a minority-based dictatorship of Alawi Muslims, followers of an offshoot of Shiite Islam representing about 15% of the Syrian population—had disempowered much of the country’s Sunni majority.  Any challenge to the Assad regime was understood, perhaps correctly, as an existential threat to Syria’s Alawi minority.  Instead of seeking a power-sharing agreement that could have prolonged his regime, Bashar sought the total defeat of his rivals.  The regime and the protesters were thus divided along sectarian lines and both sides “rejected compromise in favor of a winner-take-all struggle for control of the state” (p.78).

The Sunnis challenging Assad hoped that Western powers, especially the United States, would intervene in the Syrian conflict, as they had in Libya.  United States policy, however, as Feldman describes it, was to keep the rebel groups “in the fight, while refusing to take definitive steps that would make them win.”  As military strategy, this policy “verged on the incoherent” (p.90).  President Barack Obama wanted to avoid political responsibility for Bashar’s fall, if it came to that, in order to avoid the fate of his predecessor, President George W. Bush, who was considered politically responsible for the chaos that followed the United States intervention in Iraq in 2003.  But the Obama strategy did not lead to stability in Syria.  It had the opposite effect, notably by creating the conditions for the Islamic State, ISIS, to become a meaningful regional actor.

ISIS is known mostly for its brutality and fanaticism, such as beheading hostages and smashing precious historical artifacts.  While these horrifying attributes cannot be gainsaid, there is more to the group that Feldman wants us to see.  ISIS in his view is best understood as a utopian, revolutionary-reformist movement that bears some similarities to other utopian revolutionary movements, including John Calvin’s Geneva and the Bolsheviks in Russia in the World War I era.  The Islamic State arose in the aftermath of the failure and overreach of the American occupation of Iraq.  But it achieved strategic relevance in 2014 with the continuing breakdown of the Assad regime’s sovereignty over large swaths of Syrian territory, creating the possibility of a would-be state that bridged the Iraq-Syria border.  Without the Syrian civil war, “there would have been no Islamic State” (p.107), Feldman argues.

The Islamic State attained significant success through its appeal to Sunni Muslims disillusioned with modernist versions of political Islam of the type represented by the Muslim Brotherhood in Egypt and Ennahda in Tunisia.  With no pretensions of adopting democratic values and practices, which it considered illegitimate and un-Islamic, ISIS sought to take political Islam back to pre-modern governance.  It posited a vision of Islamic government for which the foundation was the polity “once ruled by the Prophet and the four ‘rightly guided’ caliphs who succeeded him in the first several decades of Islam” (p.102).

But unlike Al-Qaeda or other ideologically similar entities, the Islamic State actually conquered and held enough territory to set up a functioning state in parts of Syria.  Until dislodged by a combination of Western air power, Kurdish and Shia militias supported by Iran, and active Russian intervention, ISIS was able to put into practice its revolutionary utopian form of government.  As a “self-conscious, intentional product of an organized group of people trying to give effect to specific political ideas and to govern on their basis,” ISIS represents for Feldman the “strangest and most mystifying outgrowth of the Arab spring” (p.102).

* * *

Despite dispiriting outcomes in Syria and Egypt, alongside those of Libya and Yemen, Feldman is dogged in his view that democracy is not doomed in the Arabic-speaking world.  Feldman’s democratic optimism combines Aristotle’s notion of “catharsis,” a cleansing that comes after tragedy, with the Arabic notion of tragedy itself, which can have a “practical, forward looking purpose. It can lead us to do better” (p.162).  The current winter of Arab politics “may last a generation or more,” he concludes.  “But after the winter—and from its depths—always comes another spring” (p.162).  But a generation, whether viewed through the lens of the political theorist or that of the historian, is a long time to wait for those Arabic-speaking people yearning to escape autocracy, civil war, and terrorist rule.

Thomas H. Peebles 

Bethesda, Maryland 

November 10, 2021 


Filed under Middle Eastern History, Political Theory

Love Actually

Ann Heberlein, On Love and Tyranny:

The Life and Politics of Hannah Arendt

Translated from Swedish by Alice Menzies (Pushkin Press, 2021)

Before she became a celebrated New York public intellectual, Hannah Arendt (1906-1975) lived through some of the 20th century’s darkest moments. She fled her native Germany after Hitler came to power in 1933, living in France for several years.  In 1940, she spent time in two internment camps, then departed for the United States, where she resided for the second half of her life.  In 1950, Arendt became an American citizen, ending nearly two decades of statelessness.  The following year, she established her reputation as a serious thinker with The Origins of Totalitarianism, a trenchant analysis of how oppressive one-party systems came to rule both Nazi Germany and the Soviet Union in the first half of the 20th century.  As a commentator observed in The Washington Post, Arendt’s work diagnosed brilliantly the “forms of alienation and dispossession that diminished human dignity, threatened freedom and fueled the rise of authoritarianism.”

The Origins of Totalitarianism was one of a handful of older works that experienced a sudden uptick in sales in early 2017, after Donald Trump became president of the United States (George Orwell’s 1984 was another).  The authoritarian impulses that Arendt explained and Trump personified seem likely to be with us for the foreseeable future, both in the United States and other corners of the world.  For that reason alone, a fresh look at Arendt is welcome.  That is the contribution of  Ann Heberlein, a Swedish novelist and non-fiction writer, with On Love and Tyranny: The Life and Politics of Hannah Arendt.  

Heberlein’s work, ably translated from the original Swedish by Alice Menzies, constitutes the first major Arendt biography since 1982, when Elisabeth Young-Bruehl’s highly-acclaimed but dense Hannah Arendt: For Love of the World first appeared.  On Love and Tyranny, by contrast, is easy to read yet hits all the highlights of Arendt’s life and work.  Disappointingly, there are no footnotes and little in the way of a bibliography. Heberlein makes use of the diaries of a key if problematic figure in Arendt’s life, philosopher Martin Heidegger, which only became public in 2014 and cast additional light on Heidegger’s Nazi sympathies.  But it is difficult to ascertain from the book itself what other new or different sources Heberlein utilized that might have been unavailable to Young-Bruehl.

Although Arendt studied philosophy as a university student, she preferred to describe herself as a political theorist.  But despite the reference to politics in her title, Heberlein’s portrait accents Arendt’s philosophic side.  She emphasizes how the turbulent circumstances that shaped Arendt’s life forced her to apply in the real world many of the abstract philosophical and moral concepts she had wrestled with in the classroom.  As the title suggests, these include love and tyranny,  but also good vs. evil, truth, obligation, responsibility, forgiveness, and reconciliation.

At Marburg University, which she entered in 1924 as an 18-year-old first-year student, Arendt not only studied philosophy under Heidegger, already a rising star in German academic circles, but also began a passionate love affair with the man.  Heidegger was then nearly twice her age and married with two young sons (their affair is detailed in Daniel Maier-Katkin’s astute Stranger from Abroad, Hannah Arendt, Martin Heidegger: Friendship and Forgiveness, reviewed here in 2013).  Arendt left Heidegger behind when she fled Germany in 1933, but after World War II re-established contact with her former teacher, by then disgraced because of his association with the Nazi regime. A major portion of Heberlein’s work scrutinizes Arendt’s subsequent, post-war relationship with Heidegger.

Heberlein also zeroes in on Arendt’s very different post-war relationship with a seemingly very different man, Adolf Eichmann, Hitler’s loyal apparatchik who was responsible for moving approximately 1.5 million Jews to Nazi death camps.  Arendt’s series of articles for The New Yorker on Eichmann’s trial in Jerusalem in 1961 became the basis for another of her best-known works, Eichmann in Jerusalem: A Report on the Banality of Evil, published in 1963, in which she portrayed Eichmann as neither a fanatic nor a pathological killer, but rather a stunningly mediocre individual, motivated more by professional ambition than by ideology.

The phrase “banality of evil,” now commonplace thanks to Arendt, followed her for the rest of her days. How the phrase applies to Eichmann is of course well-ploughed ground, to which Heberlein adds a few insights.  Less obviously, Heberlein lays the groundwork to apply the phrase to Heidegger.  Her analysis of the banality of evil suggests that the differences between Heidegger and Eichmann were less glaring in the totalitarian Nazi environment, where whole populations risked losing their ability to distinguish between right and wrong.

* * *

Arendt was the only child of Paul and Martha Arendt, prosperous, progressive, and secular German Jews.  Paul died when Hannah (born Johanna) was 7, but she remained close to her mother, who immigrated with her to the United States in 1941. Meeting with Heidegger as a first-year student in 1924 was for Arendt “synonymous with her entry into the world of philosophy,” Heberlein writes.  Heidegger was “The Philosopher personified: brilliant, handsome, poetic, and simply dressed” (p.28).  The Philosopher made clear to the first-year student that he was not prepared to leave his wife and family or the respectability of his academic position for her.  She met him whenever he had time and was able to escape his wife.

The unbalanced Arendt-Heidegger relationship “existed solely in the shadows: never acknowledged, never visible” (p.40), as Heberlein puts it.  Arendt was never able to call Heidegger her partner because she “possessed him for brief intervals only, and the fear of losing him was ever-present” (p.41).  Borrowing a perspective she attributes to Kierkegaard and Goethe, Heberlein describes Arendt’s love for Heidegger as oscillating “between great joy and deep sorrow—though mostly sorrow” (p.31).  For these writers, whom Arendt knew well, love consisted “largely of suffering, of longing, and of distance” (p.31).  The 18-year-old, Heberlein concludes, was “struck down by a passion, possibly even an obsession, that would never fade” (p.31).

Arendt left Marburg after one year, ending up at Heidelberg University.  She later admitted that she needed to get away from Heidegger.  But she continued to see him while she wrote her dissertation at Heidelberg on St. Augustine’s conception of love. Her advisor there was the esteemed theologian and philosopher Karl Jaspers, with whom she remained friends up to his death in 1969.

After university, Arendt worked in Berlin, where she met Gunther Stern, a journalist, poet and former Heidegger student who was closely associated with the communist Bertolt Brecht.  Arendt married Stern in 1929 at age 23.  Sometime during her period in Berlin, she cut off all contact with Heidegger.  But after the Nazis came to power, Arendt began hearing alarming rumors about several specific anti-Semitic actions attributed to Heidegger at Freiburg University, where he had been appointed rector.  In a letter, she asked him to respond to the rumors, and received back a self-pitying, aggressive response that she found entirely unconvincing.

1933 was also the year Arendt and her mother left Germany and wound up in Paris. There she met Heinrich Blücher, a self-taught, left-wing German Jewish political activist. She and Stern had by then been living apart for several years, and she divorced him to marry Blücher in early 1940. The couple remained together until Blücher’s death in 1970. They were sent to separate internment camps just prior to the fall of France in 1940, but escaped together through Spain to Portugal, from where they immigrated to the United States in 1941 and settled in New York.

Arendt’s first return trip to Europe came in late 1949 and early 1950.  With Blücher’s approval, she sought out her former teacher, then in Freiburg, meeting with Heidegger and his wife Elfride in February 1950.  Understandably suspicious, Elfride seems to have understood that Arendt was in a position to help rehabilitate her husband, besmirched by his association with the Nazi regime, and accepted that he wanted Arendt to again be part of his life.  Arendt maintained a warm relationship with her former professor until her death in 1975 (Heidegger died less than a year later), writing regularly and meeting on several occasions.

In the post-war years, as Arendt’s star was rising, she became Heidegger’s unpaid agent, working to have his writings translated into English and negotiating contracts on his behalf.  She also became an enthusiastic Heidegger defender, going to great lengths to excuse, smooth over, and downplay his association with Nazism.  She once compared Heidegger to Thales, the ancient Greek philosopher who was so busy gazing at the stars that he failed to notice that he had fallen into a well.

On the occasion of Heidegger’s 80th birthday in 1969, she delivered an over-the-top tribute to her former professor, reducing Heidegger’s dalliances with Nazism to a “10-month error,” which in her view he corrected quickly enough, “more quickly and more radically than many of those who later sat in judgment over him” (p.236).  Arendt argued that Heidegger had taken “considerably greater risks than were usual in German literary and university life during that period” (p.237).  As Heberlein points out, Arendt’s tribute was a counter-factual fantasy: there was no empirical support for this whitewashed version of the man.

Heidegger had openly endorsed Nazi “restructuring” of universities to exclude Jews when he became rector at Freiburg in 1933, and his party membership was well known. His diaries, published in 2014, made clear that he was aware of the Holocaust, believed it was at least partly the Jews’ fault and, even though he ceased to be active in party affairs sometime in the mid-1930s, remained until 1945 a “fully paid-up, devoted supporter of Adolph Hitler” (p.238).  Arendt of course didn’t have access to these diaries when she rose to Heidegger’s defense, but it seems unlikely they would have changed her perspective.

Arendt’s 1969 tribute left little doubt she had found her way to forgive Heidegger for his association and support for a regime that had murdered millions of her fellow Jews, wreaked destruction on much of Europe, and forced her to flee her native country to start her life anew an ocean away. But why? Heberlein writes that forgiveness for Arendt was the conjunction of the conflicting powers of love and evil.  “Without evil, without betrayal, insults and lies, forgiveness would be unnecessary; without love, forgiveness would be impossible” (p.225).  Arendt found the strength to forgive Heidegger in the “utterly irrational emotion” that was love. Her love for Heidegger was “strong, overwhelming, and desperate. The power of the passion Hannah felt for Martin was stronger than the sorrow she felt at his betrayal” (p.226).  But whether it was right or wrong for her to forgive Heidegger, Heberlein demurely concludes, is a question only Arendt could have answered.

Did Arendt also forgive Eichmann for his direct role in transporting a staggering number of Jews to death camps? Is forgiveness wrapped within the notion of the banality of evil? Daniel Maier-Katkin suggests in his study of the Arendt-Heidegger relationship that in her experience with Heidegger, Arendt may have come to the notion of the banality of evil “intuitively and without clear articulation.”  That experience may have prepared her to comprehend that each man had been “transformed by the total moral collapse of society into an unthinking cog in the machinery of totalitarianism.”

Heberlein’s analysis of Eichmann leads to the conclusion that the notion of the banality of evil was sufficiently elastic to embrace Heidegger.  Heberlein sees the influence of Kant’s theory of “radical evil” in Arendt’s notion of the banality of evil.  For Arendt, as for Kant, evil is a form of temptation, in which the desires of individuals overrule their “duty to listen to, and act in accordance with, goodwill” (p.198).  The antidote to evil is not goodness but reflection and responsibility.  Evil grows when people “cease to think, reflect, and choose between good and evil, between taking part or resisting” (p.138).  Arendt’s sense of evil recognizes an uncomfortable truth that seems as applicable to Heidegger as to Eichmann, that most people have a tendency to:

follow the path of least resistance, to ignore their conscience and do what everyone else is doing.  As the exclusion, persecution, and ultimately, annihilation of Jews became normalized, there were few who protested, who stood up for their own principles (p.199).

For Arendt, forgiveness of such persons is possible. But not all evil can be explained in terms of obedience, ignorance, or neglect. There is such a thing as evil that is “as incomprehensible as it is unforgiveable” (p.200).  In Heberlein’s interpretation of Arendt, the genuinely evil person is the one who is “leading the way, someone initiating the evil, someone creating the context, ideology, or prejudices necessary for the obedient masses to blindly adopt” (p.201).  Whether Eichmann falls outside this standard for genuine evil is debatable. But the standard could comfortably exclude Heidegger, as Arendt had in effect argued in her 1969 tribute to her former teacher.

Arendt compounded her difficulties with the separate argument in Eichmann in Jerusalem that the Jewish councils that the Nazis established in occupied countries cooperated in their own annihilation.  The “majority of Jews inevitably found themselves confronted with two enemies – the Nazi authorities and the Jewish authorities,” Arendt wrote.  The “pathetic and sordid” behavior of Jewish governing councils was for Arendt the “darkest chapter” of the Holocaust – darker than the mass shootings and gas chambers — because it “showed how the Germans could turn victim against victim.”

The notion that Arendt was blaming the Jews for their persecution “quickly took hold,” Heberlein writes, and she was “forced to put up with questions about why she thought the Jews were responsible for their own deaths, in virtually every interview until she herself died” (p.192).  After Eichmann in Jerusalem, Arendt was shunned by many former colleagues and friends, repeatedly accused of being an anti-Israel, self-hating Jew, “heartless and devoid of empathy . . . cold and indifferent” (p.192).  When her husband died in 1970, Arendt’s isolation increased.  She was again in exile, this time existential, which surely enhanced her emotional attachment to Heidegger, the sole remaining link to the world of her youth.

* * *

Arendt’s ardent post-war defense of Heidegger, while generating little of the brouhaha that surrounded Eichmann in Jerusalem, is also a critical if puzzling piece in understanding her legacy.  Should we consider the continuation of her relationship with Heidegger as the simple but powerful triumph of Eros, an enduring schoolgirl crush that even the horrors of Nazism and the Holocaust were unable to dispel?  Heberlein’s earnest biography points us inescapably in this direction.

Thomas H. Peebles

La Châtaigneraie, France

October 12, 2021

[NOTE: A nearly identical version of this review has also been posted to the Tocqueville 21 blog  maintained in connection with the American University of Paris’ Tocqueville Review and its Center for Critical Democracy Studies]

American Polarizer

James Shapiro, Shakespeare in a Divided America:

What His Plays Tell Us About Our Past and Our Future

(Penguin Press, 2020)

In June 2017, New York City’s Public Theater staged a production in Central Park of William Shakespeare’s Julius Caesar, directed by Oskar Eustis, as part of the series known as Shakespeare in the Park.  As in many 21st century Shakespeare productions, non-whites had several leading roles and women played men’s parts.  Eustis’ Caesar, knifed to death in Act III, bore more than a passing resemblance to President Donald J. Trump: he had strange blond hair, wore overly long red ties, tweeted from a golden bathtub, and had a wife with a Slavic accent.

A protestor interrupted one of the early performances, jumping on stage after the assassination of Caesar to shout, “This is violence against Donald Trump,” according to The New York Times.  Breitbart News picked up on the story with the headline “‘Trump’ stabbed to death.”  Fox News weighed in, expressing concern that the play encouraged violence against the president.  Corporate sponsors pulled out.  Threats were leveled not only against the Public Theater and its actors, but also against other Shakespeare productions throughout the country.  A fierce but unedifying battle was fought on social media, with little regard for the ambiguities underlying Caesar’s assassination in the play.

The polemic engendered by Eustis’ Julius Caesar unsettled Columbia University Professor James Shapiro, one of academia’s foremost Shakespeare experts.  Shapiro also serves as Shakespeare Scholar in Residence at the Public Theater and in that capacity had advised Eustis’ team on some of the play’s textual issues. His most recent work, Shakespeare in a Divided America: What His Plays Tell Us About Our Past and Our Future, constitutes his response to the polemic, in which he demonstrates convincingly that the frenzied reaction to the 2017 Julius Caesar performance was no aberrational moment in American history.

Starting and finishing with the 2017 performance, Shapiro identifies seven other historical episodes in which a Shakespeare play has been enmeshed in the nation’s most divisive issues: racism, slavery, class conflict, nationalism, immigration, the role of women, adultery and same sex love.  Each episode constitutes a separate chapter with a specific year. Shapiro dives deeply and vividly into the circumstances surrounding all seven, revealing a flair for writing and recounting American history that rivals what he brings to his day job as an interpreter of Shakespeare, his plays and his age.  Of the seven episodes, the most gripping is his description of the 1849 riot at New York City’s upscale Astor Place Opera House, one of the worst in the city’s history up to that point.  By comparison, the 2017 brouhaha over Julius Caesar seems like a Columbia graduate school seminar on Shakespeare.

* * *

Fueled by raw class conflict, nationalism and anti-British sentiment, the Astor Place riot was described in one newspaper as the “most sanguinary and cruel [massacre] that has ever occurred in this country,” an episode of “wholesale slaughter” (p.49) — all arising out of competing versions of Macbeth, starring competing actors.  The Briton William Macready, performing as Macbeth at Astor Place, and the American Edwin Forrest, simultaneously rendering Macbeth at the Bowery Theatre, only a few blocks away but in a decidedly rougher part of town, offered opposing approaches to playing Macbeth that seemed to highlight national differences between the United States and Great Britain: Forrest, the “brash American, Macready the sensitive Englishman” (p.66).  Macready’s “accent, gentle manliness, and propriety represented a world that was being overtaken by everything that Forrest, guiding spirit of the new and for many coarser age of Manifest Destiny, represented” (p.66), Shapiro writes.

Shapiro’s description of the riot underscores how theatres in a rapidly growing New York City in the 1840s were democratic meeting points.  They were “one of the few places in town where classes and races and sexes, if they did not exactly mingle, at least shared a common space. This meant, in practice, that the inexpensive benches in the pit were filled mostly by the working class, the pricier boxes and galleries were occupied by wealthier patrons, and in the tiers above, space was reserved for African Americans and prostitutes” (p.56).  The Astor Place Opera House, built in 1847, was an explicit response of New York’s upper crust to these democratizing tendencies. It did not admit unaccompanied women – there was no place for prostitutes – and it imposed a dress code.  The new rules were seen as fundamentally undemocratic, especially by the city’s large number of recent German and Irish immigrants.

While Forrest opened at the Bowery, Forrest fans somehow obtained tickets to the opening Astor Place performance—who paid for them, Shapiro indicates, remains a mystery—and began heckling Macready, telling him to get off the stage, “you English fool.”  Three days later, the heckling recurred.  But this time a crowd of about 10,000 had gathered outside, an unruly mix of Irish immigrants and native-born Americans, groups that had common cause in anti-English and anti-aristocratic sentiment (many of the Irish immigrants were escaping the Irish potato famine of the mid-1840s, often attributed to harsh British policies; see my 2014 review here of John Kelly’s The Graves Are Walking: The Great Famine and the Saga of the Irish People).  Incited by political leaders and their cronies, the crowd began to throw bricks and stones. They fought a battle with police that continued for several days, with dozens of deaths on both sides.

There were “no winners in the Astor Place riots,” Shapiro writes. The mayhem “brought into sharp relief the growing problem of income inequality in an America that preferred the fiction that it was still a classless society” (p.76).  But the riots also spoke to an “intense desire by the middle and lower classes to continue sharing the public space [of the theatre], and to oppose, violently if necessary, efforts to exclude them from it.  Shakespeare continued to matter and would remain common cultural property in America” (p.78).

In two other powerful chapters, Shapiro demonstrates how Shakespeare’s plays also intertwined with mid-19th century America’s excruciating attempts to come to terms with racism and slavery.  One examines abolitionist former president John Quincy Adams’ public feud in the 1830s over what he considered the abominable inter-racial relationship Shakespeare depicts in Othello between Desdemona and the dark-skinned Othello.  In the second, Shapiro shows how, in a twist that was itself Shakespearean, fate linked President Abraham Lincoln, a man who loved Shakespeare and identified with Macbeth, to his assassin, second-rate Shakespearean actor John Wilkes Booth, himself obsessed with both Julius Caesar and what he perceived as Lincoln’s efforts to undermine the supremacy of the white race.

John Quincy Adams, who served as president from 1825 to 1829, found Desdemona’s physical intimacy with Othello, known at the time as “amalgamation” (“miscegenation” did not enter the national vocabulary until the 1860s), to be an “unnatural passion” against the laws of nature.  Adams’ views might have gone largely unnoticed but for a dinner party in 1833, at which the 66-year-old former president was seated next to 23-year-old Fanny Kemble, a rising young Shakespearean actress from England.  Adams apparently thrust his views of the Othello-Desdemona relationship upon the unsuspecting Kemble.

Two years later, Kemble published a journal about her trip to the United States, in which she described her dinner conversation with the former president.  A piqued Adams felt compelled to respond, elaborating in print about how repellent he found the Desdemona-Othello relationship. The dinner conversation of two years earlier between the ex-president and the rising British actress thus became national news and, with it, Adams’ anxieties about not only the dangers of race-mixing but also the threat posed by disobedient women.

Yet, the ex-president who was so firmly against amalgamation was also a firm abolitionist.  Adams’ abolitionist convictions, Shapiro writes, “seem to have required a counterweight, and he found it in this repudiation of amalgamation” (p.20).  By directing his hostility at Desdemona rather than Othello, moreover, Adams astutely sidestepped criticizing black men, and it “proved more convenient to attack a headstrong young fictional woman than a living one” (p.20).  Although Adams was a prolific writer, his public feud with Kemble represented his sole written attempt to square his disgust for interracial marriage with his abolitionist convictions, and he chose to do so “only through his reflections on Shakespeare” (p.20).

Abraham Lincoln, from humble frontier origins with almost no formal schooling, developed a life-long passion for Shakespeare as a youth.  Shapiro notes that the adult Lincoln regularly asked friends, family, government employees, and relative strangers to listen to him recite, sometimes for hours on end – and then discuss – the same few passages from Shakespeare again and again.  John Wilkes Booth too grew up with Shakespeare, but in altogether different circumstances.

Booth’s father owned a farm in rural Maryland but was also a leading English Shakespearean actor who immigrated to the United States and became a major figure on the American stage.  His three sons followed in their father’s footsteps, with older brothers Edwin and Junius attaining genuine star status, a status that eluded their younger brother John.  Although Maryland was a border state that did not join the Confederacy, John, who had been convinced from his earliest years that whites were superior to blacks, was naturally drawn to the Southern cause.

In 1864, both the year of Lincoln’s re-election and the 300th anniversary of Shakespeare’s birth, Booth was stalking Lincoln and plotting his removal with Confederate operatives.  Lincoln, who had less than six months to live when he was re-elected in November, found himself brooding more and more about Macbeth in his final months, and especially about the murdered King Duncan.  Through his reflection upon the guilt-ridden Macbeth, Shapiro writes, Lincoln felt the “deep connection between the nation’s own primal sin, slavery, and the terrible cost, both collective and personal, exacted by it” (p.113).

After Booth assassinated Lincoln at Ford’s Theater in Washington in April 1865, many of Lincoln’s enemies likened the assassin, whose favorite play was Julius Caesar, to Brutus as a man who killed a tyrant.  But Macbeth proved to be the play that the nation settled on to “give voice to what happened, and define how Lincoln was to be remembered” (p.116).  Booth had “failed to anticipate that the man he cold-bloodedly murdered would be revered like Duncan, his faults forgotten” (p.118).  For a divided America, the universal currency of Shakespeare’s words offered what Shapiro terms a “collective catharsis” which permitted a “blood-soaked nation to defer confronting once again what Booth declared had driven him to action: the conviction that America ‘was formed for the white not for the black man’” (p.118).

The year 1916 was the 300th anniversary of Shakespeare’s death, a year in which one of his least known plays, The Tempest, was used to bolster the case for anti-immigration legislation. The Tempest centers on Caliban, who is left behind, rather than on those who immigrate.  But the point is the same, Shapiro argues: a “more hopeful community . . . depends on somebody’s exclusion” (p.125).  This notion resonated in particular with Massachusetts Senator Henry Cabot Lodge, an avid Shakespeare reader who led the early 20th century anti-immigration campaign.

The unusual number of performances of The Tempest during that tercentenary year meshed with the fierce debate that Lodge led in Congress over immigration.  The legislation that passed the following year curtailed the influx into the United States of immigrants representing “lesser races,” most frequently a reference to Southern and Eastern Europeans. “How Shakespeare and especially The Tempest were conscripted by those opposed to the immigration of those deemed undesirable is a lesser known part of this [immigration] story” (p.124), Shapiro writes.

Closer to the present, Shapiro has chapters on the 1948 Broadway musical Kiss Me, Kate, later a film, about the cast of Shakespeare’s The Taming of the Shrew, which raised the issue of the roles of women in a post-war society; and on the 1998 film Shakespeare in Love, by far the most successful film to date about Shakespeare or any of his plays, which began as a film about same-sex love but evolved into one about adultery.

Kiss Me, Kate takes place backstage at a performance of The Taming of the Shrew.  With music and lyrics provided by Cole Porter, the Broadway musical contrasted the emerging, post-World War II view of the role of women with the conventional stereotyped gender roles in the Shakespeare play itself, thereby featuring “rival visions of the choices women faced in postwar America” (p.160).  In Shakespeare’s play, “women are urged to capitulate and their obedience to men is the norm,” while backstage “independence and unconventionality hold sway” (p.160).  Kiss Me, Kate deftly juxtaposed a “front stage Shakespeare world that mirrored the fantasy of a patriarchal, all-white America” with a backstage one that was “forthright about a woman’s say over her desires and her career” (p.162).

In the earliest version of the film Shakespeare in Love in 1992, Will found himself attracted to the idea of same sex attraction (he was actually attracted to a woman dressed as a man, but the point was that Will thought she was a he).  But same sex love was reduced to a mere hint in the final version, which tells instead how the unhappily married Will’s affair with another woman, Viola, helped him overcome his writer’s block, finish Romeo and Juliet, and go on to greatness.  Those creating and marketing Shakespeare in Love, Shapiro writes, “clearly felt that a gay or bisexual Shakespeare was not something that enough Americans in the late 1990s were ready to accept” (p.194).  For box-office success, “Shakespeare could be an adulterer, but he had to be a heterosexual one in a loveless marriage” (p.194).

Shakespeare in Love ends with Viola leaving Will and England for America, reinforcing a myth that persisted from the 1860s through the 1990s of a direct American connection to Shakespeare — anti-immigration Senator Lodge was one of its most exuberant proponents.  This fantasy, Shapiro writes, speaks to our desire to “forge a physical connection between Shakespeare and America” as the land where his “inspiring legacy came to rest and truly thrived” (p. 193).

* * *

While finding no credible evidence for a direct American connection to Shakespeare, Shapiro sees a legacy in Shakespeare’s plays that should inspire Americans of all hues and stripes.  Pained by the polarization he witnessed at the 2017 Julius Caesar performance, Shapiro expresses the hope that his book might “shed light on how we have arrived at our present moment, and how, in turn, we may better address that which divides and impedes us as a nation” (p.xxix).  The hope seems forlorn in light of the examples he so brilliantly details, pointing mostly in the other direction: a Shakespeare on the cutting edge of America’s social and political divisions, with his plays often doing the cutting.

Thomas H. Peebles

Paris, France

September 19, 2021

[NOTE: A nearly identical version of this review has also been posted to the Tocqueville 21 Blog, maintained in connection with the American University of Paris’ Tocqueville Review and its Center for Critical Democracy Studies]

Alarming Portrait of a Ruthlessly Ambitious Crown Prince

Ben Hubbard, MBS: The Rise to Power of Mohammed Bin Salman

(Tim Duggan Books, 2020)

Mohammed Bin Salman, better known by his initials, MBS, is today the Crown Prince of Saudi Arabia and seems poised to become Saudi King upon the death of his ailing father, 86-year-old Salman bin Abdulaziz.  Still youthful at age 36, MBS has achieved what appears to be unchallenged power within the mysterious desert kingdom, the birthplace of Islam and the location of its two most holy sites.  Internationally, MBS is indelibly associated with the gruesome October 2018 murder of Saudi journalist Jamal Khashoggi, a murder he probably ordered and, if not, almost certainly enabled.  Even apart from the Khashoggi killing, the Saudi Crown Prince has compiled a record that is awash in contradictions since his ascent to power began in 2015.

MBS seems bent on modernizing and diversifying the oil-dependent Saudi economy. He has taken highly publicized steps against corruption; clipped the wings of the clergy and religious police; and accorded Saudi women the right to drive.  Young Saudis appreciate that MBS is largely responsible for movie theatres opening and rock concerts now taking place in their country.  But MBS’s record is also one of brutal suppression of opponents, potential opponents and dissidents – brutal even by Saudi standards.  His regime seems to be borrowing from the authoritarian Chinese model of extensive economic modernization, accompanied by limited and tightly controlled social liberalization, all without feigning even nominal interest in political democratization.  Saudi Arabia under MBS remains, like China, one of the world’s least democratic societies.

In MBS: The Rise to Power of Mohammed Bin Salman, Ben Hubbard, a journalist for The New York Times with extensive experience in Saudi Arabia and the Middle East, has produced the first — and to date only — biography of the Saudi Crown Prince available in English.  Any biography of MBS is bound to be incomplete, given the wall of secrecy MBS has built up around himself, shielding much of the detail of what he has done and how he operates within the generally secretive royal Saudi circles.  But somehow Hubbard managed to scale that wall.  Using a wide array of sources, many anonymous, he has pieced together a remarkably easy-to-read yet riveting and alarming portrait of a man who has eliminated all apparent sources of competition.

Today’s Saudi Arabia is in unfamiliar territory, with power concentrated in a single individual, Hubbard demonstrates convincingly.  Everyone of consequence, from rich tycoons to the extensive royal Saudi family itself, answers to MBS.  There is little that Saudi Arabia’s old elites can do to counter the upstart Crown Prince.  The collegial days when seniority reigned, elder princes divided portfolios among themselves, and decisions were made through consensus are little more than memories of a by-gone era.  MBS has “destroyed that system” (p.267), Hubbard bluntly concludes.

* * *

Although MBS studied law at university and finished 4th in his class in 2007, at the time of his graduation there was little reason to expect that he would become anything more than, as Hubbard puts it, a “middling prince who dabbled in business and pitched up abroad now and then for a fancy vacation” (p.15).  Unlike many Saudi princes, the young MBS “never ran a company that made a mark.  He never acquired military experience.  He never studied at a foreign university.  He never mastered, or even became functional, in a foreign language.  He never spent significant time in the United States, Europe, or elsewhere in the West” (p.16).

All that changed in January 2015, when his father Salman became Saudi king at age 79.  MBS, 29 years old, was named Minister of Defense and placed in charge of the Royal Court, with a huge role to play in the kingdom’s finances.  Within days, he had reorganized the government, setting up separate supreme councils for economic development and security.  Although little known outside inner Saudi circles at the time, as Minister of Defense MBS was the force behind the Saudi military intervention in neighboring Yemen to suppress an on-going insurgency led by the Houthis, an Islamist group from Northern Yemen whom the Saudis had long considered proxies for Iran.

Touted as a quick and easy military intervention, the conflict in Yemen turned into a stalemate, with humanitarian and refugee crises that continue to this day. The decision to intervene militarily appears to have been that of MBS alone — a “one man show,” as a Saudi National Guard official told Hubbard, undertaken with no advance consultation, either internally or with the Saudis’ traditional military benefactors in Washington.  The National Guard official told Hubbard that the Saudi intervention was “less about protecting the kingdom than burnishing MBS’ reputation as a tough leader” (p.91).

In April 2015, King Salman appointed MBS’ cousin, the considerably older Mohammed bin Nayef, known as MBN, as Crown Prince, with MBS named “Deputy Crown Prince,” second in line to the throne.  MBN had been the Saudis’ official voice and face in the war on terror, with deep CIA contacts.  The Americans thought he was the perfect “next generation” king.  But MBS had other ideas.  Although the Deputy Crown Prince remained outwardly deferential to his cousin, he appears to have been plotting MBN’s ouster at least from the time his cousin was appointed Crown Prince.  When the plot succeeded in June 2017, with MBS replacing his cousin as Saudi Crown Prince, the official Saudi version was that the appointment was the decision of King Salman alone.

Hubbard tells an altogether different story.  In his account, MBS in effect kidnapped his cousin to force his abdication.  When MBN refused to abdicate, a council friendly to MBS met to formally “ratify” what was presented as a “decision” of the king to make MBS Crown Prince.  Only then did MBN give in, signing a document of abdication.  He was placed under house arrest by guards loyal to MBS and relieved of his counterterrorism and security duties, which were “reassigned” to a new security body that reported to MBS.  His bank accounts were frozen and he was stripped of many of his assets.  In March 2020, MBN was arrested on charges of treason and has not been heard from since, held in a location unknown even to his lawyers.

MBS attracted world attention a few months later, in November 2017, when he invited many fellow members of the royal family, along with other movers and shakers within the kingdom, to the posh Ritz-Carlton hotel in Riyadh for what was billed as an anti-corruption conference.  Anxious to meet MBS and obtain insider advantages, the attendees eagerly came to Riyadh, only to be all-but-arrested and forcibly detained when they arrived.  The detentions at what was dubbed the world’s most luxurious prison lasted weeks and sometimes months.  By mid-February 2018, most of the detainees had “settled” with the government and were allowed to leave.  The Ritz detainments were what Hubbard describes as a pivot point in MBS’ ascendancy, an “economic earthquake that shook the pillars of the kingdom’s economy and rattled its major figures” (p.200), all of whom thereafter answered to MBS.

Less noticed internationally was a surprise royal decree stripping the Wahhabi religious police of many of their powers.  Henceforth, they could not arrest, question, or pursue subjects except in cooperation with the regular police. The decree, part of an on-going effort to curtail the authority of Saudi Arabia’s ultra-conservative religious establishment, “defanged the clerics,” Hubbard writes, “clearing the way for vast changes [which] they most certainly would have opposed” (p.63).  The changes involved some wildly popular measures, especially the opening of commercial cinemas and other entertainment venues, such as concerts and opera.  Equally popular was a decree allowing Saudi women to drive.

For decades, activist Saudi women had challenged, often at considerable cost to themselves, a ban on driving that was only a Wahhabi religious dictate, not codified officially in Saudi law (in 2017, I reviewed here the memoir of Manal Al-Sharif, one such activist).  But when MBS saw fit to declare women eligible to drive in June 2018, he did not give any credit to the activist women. They were never thanked publicly or even acknowledged; some were jailed almost simultaneously with the lifting of the ban.

MBS’ grandiose and upbeat plans for modernizing the Saudi economy by shifting away from its oil-dependency found expression in his Vision 2030 document.  Prepared in collaboration with a phalanx of international consultants, Vision 2030 projected that the kingdom would create new industries, rely on renewable energy, and manufacture its own military equipment, all in an effort to “transform itself into a global investment giant, and establish itself as a hub for Europe, Asia, and Africa” (p.67).  MBS presented his plan when he accompanied his father to a meeting in Washington with President Barack Obama, where it was perceived as a slick set of talking points, without much depth.

Vision 2030, Saudi Arabia and MBS all fared better when the administration of Donald Trump replaced the Obama administration in early 2017.  One of the greatest ironies of the Trump era, Hubbard writes, was that Trump, “after demeaning Saudi Arabia and its faith throughout the campaign, would, in the course of a few months, anoint Saudi Arabia a preferred American partner and the lynchpin of his Middle East policy” (p.107).  Saudi-American relations improved in the Trump years in no small part because of the warm if unlikely relationship that MBS struck with the president’s son-in-law, Jared Kushner, two young “princelings,” as Hubbard describes them, “an Arab from central Arabia and a Jew from New Jersey” (p.113).

The two princelings were “both in their thirties and scions of wealthy families who had been chosen by older relatives to wield great power.  They both lacked extensive experience in government, and saw little need to be bound by its strictures” (p.113). Their relationship blossomed because Kushner viewed MBS as someone who could help unlock peace between Israel and Arabs, while MBS expected Kushner to push the United States to champion Vision 2030, stand up to Iran, and support him as he sought to consolidate power.  But the Khashoggi killing in October 2018 temporarily flummoxed even the Trump administration.

Khashoggi had served briefly as one of MBS’s confidantes as the Crown Prince began his rise to power.  Their initial meeting led Khashoggi to believe that MBS was receptive to greater openness and had given him a “mandate to write about, and even critique, the prince’s reforms” (p.78).  But as Khashoggi became a more visible critic of the regime from abroad, mostly in the United States where he was a permanent legal resident and wrote for The Washington Post, the relationship deteriorated.  Hubbard was an associate and friend of Khashoggi and dedicates a substantial portion of the last third of his book to the slain journalist and what we know about his killing.

Hubbard presents a plausible argument that MBS may not have actually ordered the killing — essentially that MBS’s team was carrying out what they thought the boss wanted, without being explicitly ordered to do so.  Even so, MBS had “fostered the environment in which fifteen government agents and a number of Saudi diplomats believed that butchering a nonviolent writer inside a consulate was the appropriate response to some newspaper columns” (p.280).  The Khashoggi killing served as a wake-up call for the world.  It “flushed away much of the good will and excitement that MBS had spent the last four years generating” (p.276).

In the aftermath of the killing, President Trump issued a statement in which he insisted that United States security alliances and massive Saudi purchases of US weaponry were more important than holding top Saudi leadership accountable.  “We do have an ally, and I want to stick with an ally that in many ways has been very good,” Trump was quoted as saying.  After publication of Hubbard’s book, a new administration led by Joe Biden arrived in Washington amidst hopes that the United States would recalibrate its relationship with Saudi Arabia, particularly in light of the known facts about the Khashoggi killing.

* * *

Those hopes increased in February of this year when the Office of the Director of National Intelligence (ODNI) released a two-page summation of its investigation into the killing (the Trump administration had withheld the full report for nearly two years).  The ODNI concluded that MBS had “approved” the Khashoggi killing.  But its conclusion was derived inferentially rather than from any “smoking gun” evidence it chose to reveal publicly.

The ODNI based its conclusion on MBS’s “control of decision-making in the Kingdom since 2017, the direct involvement of a key adviser and members of Muhammad bin Salman’s protective detail in the operation, and the Crown Prince’s support for using violent measures to silence dissidents abroad, including Khashoggi.” Given MBS’s “absolute control of the Kingdom’s security and intelligence organizations,” the ODNI found it “highly unlikely that Saudi officials would have carried out an operation of this nature without the Crown Prince’s authorization.”

To the disappointment of human rights activists, the Biden administration nonetheless determined that it would impose no direct punishment on MBS.  Sanctioning MBS, according to an anonymous senior official quoted in The Washington Post, would have been viewed in the kingdom as an “enormous insult,” making an ongoing relationship with Saudi Arabia “extremely difficult, if not impossible.”  The senior official said that, after looking at the MBS case extremely closely over the course of about five weeks, the Biden foreign policy team had reached the “unanimous conclusion” that there was “another more effective means to dealing with these issues going forward.”  As US Secretary of State Antony Blinken stated at a public press conference, sounding eerily like former President Trump, the relationship with Saudi Arabia is “bigger than any one individual.”

The Biden administration did identify 76 other Saudi officials subject to sanctions for their presumed roles in the killing.  President Biden also announced the end of US military supplies and intelligence sharing for the Saudi military intervention in Yemen. He has moreover refused to speak directly with MBS, restricting his contact to his father, King Salman.  For the time being, MBS’s Washington contacts as the Saudi defense minister stop at the level of the US Secretary of Defense, Lloyd Austin.

* * *

These protocol decisions will have to be revisited if, as expected, MBS becomes king when his ailing father dies.  One way or another, the United States will need to find a way to deal with a man likely to be a consequential figure on the world stage for decades to come.

Thomas H. Peebles

La Châtaigneraie, France

August 31, 2021

 

 


Filed under American Politics, Biography, Politics

Viewing Responsibility for Human Rights Through a Forward-Looking Lens

 

 

 

Kathryn Sikkink, The Hidden Face of Rights:

Toward a Politics of Responsibilities (Yale University Press, 2020)

Kathryn Sikkink, Professor at the Harvard Kennedy School of Government, is one of the leading academic experts on international human rights law — the body of principles arising out of a series of post-World War II human rights treaties, conventions, and other international instruments. Recently, I reviewed her Evidence for Hope: Making Human Rights Work in the 21st Century here.  In that work, Sikkink took on a host of critics of the current state of international human rights law who had challenged both its legitimacy and its effectiveness.  Before Evidence for Hope, she was the author of the highly acclaimed Justice Cascade: How Human Rights Prosecutions Are Changing World Politics, where she argued forcefully for holding individual state officials, including heads of state, accountable for human rights violations.

Now, Sikkink asks us to look at human rights, and especially how we can best implement those rights, through a different lens.  In her most recent work, The Hidden Face of Rights: Toward a Politics of Responsibilities, portions of which were originally delivered as lectures at Yale University’s Program in Ethics, Politics and Economics, Sikkink argues that we need to increase our focus on the duties, obligations, and responsibilities undergirding human rights. Although “duties,” “obligations,” and “responsibilities” are nearly functional equivalents, “responsibilities” is Sikkink’s preferred term. Moreover, Sikkink is concerned with what she terms “forward-looking” rather than “backward-looking” responsibilities.

Forward-looking responsibility turns largely on the development of norms, the voluntary acceptance of mutual responsibilities about appropriate behavior.  It stands in contrast to backward-looking responsibilities, which are based on a “liability model” that asks who is responsible for a violation of human rights and how that person or institution can be held accountable — or responsible.  Sikkink seeks to supplement rather than supplant the liability model, describing it as appropriate in some contexts but not others.  Although necessary, backward-looking responsibilities “cannot address many of the complex, decentralized issues that characterize human rights today” (p.40), she contends.

For Sikkink, forward-looking responsibility is ethical and political, not legal.  She is not arguing to make forward-looking responsibilities legally binding.  Nor is she seeking to create new rights—only to implement existing ones more effectively.  But she uses the term ‘human rights’ broadly, to include the political, civil, economic, and social rights embodied in the major post-war treaties and conventions, along with new rights, such as the right to a clean environment and to freedom from sexual assault.

The crux of Sikkink’s argument is that voluntary acceptance of norms, not fear of sanctions, is in most cases a more effective path to full implementation of human rights.  Sustaining and reinforcing norms entails a pragmatic, “what-might-work” approach, brought about by “networked responsibilities,” one of her key terms, a collective effort in which all those connected to a given injustice — the “agents of justice,” more often private individuals than state actors — step forward to do their share. One of Sikkink’s principal objectives is to bring the theory of human rights into line with existing practice.

Sikkink notes that the activist community charged with implementation of human rights already has “robust practices of responsibility. But it does not yet have explicit norms about the responsibility of non-state actors in implementing human rights” (p.36).  Rights activists are reluctant to talk about responsibilities of non-state actors out of concern that such talk might “take the pressure off the state, risk blaming the victim, underplay the structural causes of injustice, or crowd out other more collective forms of political action” (p.5).  Human rights activists, Sikkink emphasizes, while avoiding recognizing responsibility explicitly, have nonetheless implicitly “assumed responsibility and worked in networks with other agents of justice to bring about change” (p.127).  In this sense, responsibilities are the “hidden face of rights, present in the practices of human rights actors, but something that activists don’t talk about” (p.5).

In the first third of the book, Sikkink establishes the theoretical framework for a forward-looking conception of human rights implementation. In the last two-thirds, she applies her forward-looking model to five issues that are close to her heart and home: voting, climate change, sexual assault, digital privacy, and free speech on campus.  Her discussion of these issues is decidedly US-centric, based mostly on how they arise at Harvard and, to a lesser extent, on other American university campuses, with only minimal reference to what a forward-looking approach to implementation of the same rights might entail in other countries.  Among the five issues, voting receives the most extensive treatment, about one-third of the book, as much as the other four topics combined.  Several factors prompted me to question whether voting is the best example of forward-looking responsibility in operation.

* * *

In the voting context, forward-looking responsibility means above all the acceptance of a norm that considers voting a non-negotiable responsibility of citizenship, much like serving on a jury and paying taxes. But we also have a “networked responsibility” to convince others to accept the voting norm and to assist them in exercising that right.  Sikkink’s discussion zeroes in on how to increase voter turnout among Harvard students and, through focus-group sessions with such students, examines the challenges of persuading them to accept the voting norm.

Sikkink recognizes that Harvard students are far from representative of American university students, let alone of Americans generally.  Although at the pinnacle of privilege in American society, Harvard students, like their peers at other universities, nonetheless under-participate in local and national elections. The difficulties they encounter in registering to vote and casting their ballots are a telling indication that the electoral system is complex for far wider swaths of the American public.  But focusing on them leaves out consideration of how to reach and persuade less privileged groups.  A few of Stacey Abrams’s insights would have been useful.

Sikkink’s book, moreover, went to press prior to the November 2020 presidential election, in which approximately 159 million Americans voted — a record turnout, constituting about two-thirds of the eligible electorate and a whopping seven percentage points higher than the 2016 turnout. Yet the election and its aftermath have given rise to unprecedented turmoil, including unsupportable claims of a “stolen” election and an uprising at the U.S. Capitol in January, fundamentally altering the national conversation over voting in the United States from what it was a year ago. Sikkink’s concerns about voter apathy no longer seem quite so central to that conversation.

Rather, more than six months after the election, a substantial minority of the American electorate still adheres to the notion of a “stolen” election, despite overwhelming evidence that the official election results were fully accurate within any reasonable margin of error.  In the aftermath of the election, furthermore, legislatures in several states have adopted or are considering measures that seem designed specifically to discourage some of America’s most vulnerable groups from voting, under the guise of preventing voter fraud — even though evidence of actual fraud in the 2020 election was scant to non-existent.  Sikkink foresees this issue when she notes that state officials in some parts of the United States “do not want to expand voter turnout and even actively suppress it” (p.111).  In such situations, she writes, “networked responsibility of non-state actors to change voting norms and practices is all the more important” (p.111).  If Sikkink were writing today, it seems safe to say that she would elaborate upon this point at greater length.

Unlike some of the rights Sikkink discusses, however, voting to select a country’s leaders is firmly established in written law.  But the responsibility side of this unquestioned right must compete with a plausible claim that in a democratic society based on freedom of choice, a right not to vote should be recognized as a legitimate exercise of that freedom — a way, for instance, of expressing one’s disenchantment with the electoral and political system or, more parochially, dissatisfaction with the candidates offered on the ballot.  Many of the students in the Harvard focus group expressed the view that voting should be “situational and optional” (p.92).  Sikkink emphatically rejects this argument, suggesting at one point that casting a blank ballot is the only responsible way to express such views: “if one is going to refuse to vote in protest, it must be just as hard as voting” (p.121), she writes.

By coincidence, as I was wrestling with Sikkink’s arguments against recognizing a right not to vote in June of this year — and finding myself less than fully convinced — I was following presidential elections in Iran, which witnessed its lowest voter turnout in four decades: slightly less than 50%, with another 14% casting blank ballots.  Dissidents in Iran organized a campaign this year that urged abstention as the most principled way to express opposition to what the campaign leaders maintained was an intractably tyrannical regime.

The abstention campaign argued that the voting process for the election had been structured to eliminate any serious reform candidates; that the Iranian government since 1979 had an extensive track record of voter intimidation and manipulation of vote counting; and that the Iranian government uses the usually high turnout rates (85% for the 2009 presidential election; over 70% in 2013 and 2017) to affirm its own legitimacy. In short, there seemed to be little reason why Iranians could anticipate that the election would be “free and fair,” which may be the necessary predicate to Sikkink’s rejection of a right not to vote, a point she may wish to elaborate upon subsequently (were she writing today, Sikkink might also address the “freedom” not to wear a mask or to be vaccinated during a pandemic; I also wondered how Sikkink would react to regional French elections, which took place immediately after the Iranian election, in which an astounding two-thirds of the electorate abstained).

If Sikkink’s application of forward-looking responsibility to voting contains rough edges, her application to climate change makes for a near-perfect fit. While it is obviously of utmost importance to know the underlying causes of climate change and to understand how we reached the current crisis, backward-looking responsibility — seeking to hold responsible those who contributed to the crisis — has only limited utility.  Without letting big fossil fuel polluters off the hook for their disproportionate contribution to the current state of affairs, backward-looking responsibility “must be combined with forward-looking responsibilities,” Sikkink argues, “including the responsibilities of actors who are not directly to blame” (p.54).  When it comes to climate change, we are all “agents of justice” if we want to preserve a livable planet.

The backward-looking liability model remains critical when applied to the right to be free from sexual assault, a large umbrella category that includes all non-consensual sexual activity or contact, including but not limited to rape.  Any effort to limit sexual assault must “first hold perpetrators responsible—and, where appropriate, criminally accountable” (p.139), Sikkink writes. But we also need to “think about the forward-looking responsibility of multiple agents of justice, especially how potential victims, as capable agents, can take measures to prevent future violence” (p.138).

Digital privacy, Sikkink explains, transcends the interest of individuals to limit the dissemination of their own personal information.  She describes how we can inadvertently expose others to online privacy invasions.  In protecting privacy online, we need to become proficient in what she terms “digital civics,” another term for the forward-looking responsibility of Internet users to help ensure both their own privacy rights and those of other users.

A separate but related aspect of digital civics is learning how to recognize and not spread disinformation, or “fake news,” thereby raising questions about the bounds of the right to free speech online.  We all have an ethical and political responsibility, if not quite a legal one, to evaluate sources and to refrain from sharing (or “liking”) information that does not appear to have sound factual grounding, Sikkink argues. The bounds of free speech also arise on campus, in the search for a balance between the right to speak itself and the right to protest speech that one finds offensive.

On university campuses today, many students feel they have an obligation to defend fellow students, and oppressed people generally, against hurtful and degrading speech. Sikkink notes that over half the students responding to one survey thought it was acceptable to shout at speakers making what they perceived to be offensive statements, while 19% said it was acceptable to use violence to prevent what is perceived to be abusive speech. These are not responsible exercises of one’s right to protest offensive speech, Sikkink responds.  Violence and drowning out the speech of others are more than just “problematic from the point of view of the ethic of responsibility” (p.136).  Pragmatically, these forms of protest have been demonstrated to be unlikely to generate support for the ideas espoused by those using such tactics.

* * *

Pragmatism thoroughly infuses Sikkink’s notion of forward-looking responsibility, as applied not only to campus speech and the other rights discussed here but, presumptively, to the full range of recognized human rights.   Her pragmatism animates the question she closes the book with, literally her bottom line: in addition to — or even instead of — asking who is to blame, we should ask: “What together we can do?” (p.148).  As her fellow academic theorists evaluate the fresh perspective that Sikkink brings to international human rights in this compact but thought-provoking volume, they will want to weigh in on the pertinence of this question to our understanding of those rights.

 

Thomas H. Peebles

Caen, France

August 21, 2021

[NOTE: A nearly identical version of this review has also been posted to the Tocqueville 21 blog, maintained in connection with the American University of Paris’ Tocqueville Review and its Center for Critical Democracy Studies]

 

 

 


Filed under Political Theory, Rule of Law

Deciphering a Confounding Thinker

 

 

Robert Zaretsky, The Subversive Simone Weil:

A Life in Five Ideas (University of Chicago Press)

 

Simone Weil is considered today among the foremost twentieth-century French intellectuals, on par with her luminous contemporaries Simone de Beauvoir, Jean-Paul Sartre, and Albert Camus. And yet she was not widely known when she died at age 34 in 1943. Although she wrote profusely, only small portions of her writings were published during her lifetime. Much of her written work was left in private notebooks and published posthumously. It was only after the Second World War, as Weil’s writings increasingly came to light, that a comprehensive picture of her thinking emerged —comprehensive without necessarily being coherent. In The Subversive Simone Weil: A Life in Five Ideas, Robert Zaretsky attempts to provide this coherence.

Indeed, Weil was a confounding thinker whose body of thought and the life she lived seem awash in contradictions. As Zaretsky notes at the outset, Weil was:

an anarchist who espoused conservative ideals, a pacifist who fought in the Spanish Civil War, a saint who refused baptism, a mystic who was a labor militant, a French Jew who was buried in the Catholic section of an English cemetery, a teacher who dismissed the importance of solving a problem, [and] the most willful of individuals who advocated the extinction of the self (p.2).

 Zaretsky, a professor at the University of Houston and one of the Anglophone world’s most fluent writers on French intellectual and cultural history, aims not so much to dispel these contradictions as to distill Weil’s intellectual legacy, contradictions and all, into five core ideas encapsulating the body of political, social, and theological thought she left behind. These five ideas are: affliction, attention, resistance, rootedness, and goodness—each the object of a separate chapter.

Unsurprisingly, these five Weilian ideas are far more intricate and multi-faceted than the single words suggest, and they are inter-related, with what Zaretsky terms “blurred borders” (p.14).  Moreover, the five ideas are presented in approximate chronological order: the first three chapters on affliction, attention, and resistance concern mostly Weil in the 1930s, while the last two on rootedness and goodness primarily cover her wartime years from 1940 to 1943—her most productive literary period.

Each chapter can be read as a standalone essay, and Zaretsky would likely discourage us from searching too eagerly for threads that unite the five into an overarching narrative. But there is one connecting thread which provides context for the apparent contradictions in Weil’s life and thought: collectively, the five ideas tell the story of Weil’s transformation from an exceptionally empathetic yet otherwise conventional 1930s non-communist, left-wing intellectual—Jewish and secular—to someone who in her final years found commonality with conservative political and social thought, embraced Catholicism and Christianity, and was profoundly influenced by religious mysticism. Although not intended as a biography in the conventional sense, The Subversive Simone Weil begins with a short but helpful overview of Weil’s abbreviated life before plunging into her five ideas.

* * *

Weil was born in 1909 and brought up in a progressive, militantly secular bourgeois Jewish family in Paris. Her older brother André became one of the twentieth century’s most accomplished mathematicians. She graduated in 1931 from France’s renowned École Normale Supérieure, the same school that had accorded diplomas to Jean-Paul Sartre and Raymond Aron a few years earlier.  After ENS, she took three secondary teaching positions in provincial France, and also managed to find her way to local factories, where she taught workers in evening classes and with limited success did some of the hard factory work herself.

In 1936, Weil joined the Republican side in the Spanish Civil War, and was briefly involved in combat operations before she inadvertently stepped into a vat of boiling cooking oil, severely injuring her foot. After she returned to France to allow her injury to heal, she had three seemingly genuine mystical religious experiences that set in motion what Zaretsky characterizes as rehearsals for her “slow and never quite completed embrace of Roman Catholicism” (p.134).  When Nazi Germany invaded France in 1940, Weil and her parents caught the last train out of Paris for Marseille, where they stayed for almost two years before leaving for New York. While in Marseille, Weil was deeply influenced by Joseph-Marie Perrin, a nearly blind Dominican priest, and came close but stopped short of a formal conversion to Catholicism.

Weil left her parents in New York for London, where she joined Charles de Gaulle’s government-in-exile, with ambitions that never materialized to return to France to battle the Nazis directly. While in London, her primary responsibility was to work on reports detailing a vision for a liberated and republican France. Physically frail most of her life, Weil suffered from migraines, and may have been on a hunger strike when she died of complications from tuberculosis in 1943, in a sanatorium south-east of London.

* * *

Malheur was Weil’s French term for “affliction.” This is the first of the five ideas that Zaretsky distills from Weil’s life and thought, in which we see Weil at her most political. Her idea of affliction appears to have arisen principally from her experiences working in factories early in her professional career.  Yet, affliction for Weil was the condition not just of factory workers, but of nearly all human beings in modern, industrial society—the “unavoidable consequence of a world governed by forces largely beyond our comprehension, not to mention our control” (p.36).  Affliction was “ground zero of human misery” (p.36), entailing psychological degradation as much as physical suffering.

The early Weil was attracted politically to anarcho-syndicalism, a movement that urged direct action by workers as the means to achieve power in depression-riddled 1930s France, with direct democracy of worker co-operatives as its end. In these years, Weil was an “isolated voice on the left who denounced communism with the same vehemence as she did fascism” (p.32), Zaretsky writes, comparing her to George Orwell and Albert Camus. With what Zaretsky describes as “stunning prescience” (p.32), she foresaw the foreboding consequences of totalitarianism emerging both in Stalin’s Russia and Hitler’s Germany.

Attention, sometimes considered Weil’s central ethical concept, involves how we see the world and others in it. But it is an elusive concept, “supremely difficult to grasp”  (p.46).  Attention was attente in French: waiting, which requires the canceling of our desires.  Attention takes place in what Zaretsky terms the world’s salle d’attente, its waiting room, where we “forget our own itinerary and open ourselves to the itineraries of others” (p.54).  Zaretsky sees the idea of attention at work in Weil’s approach to teaching secondary school students, where her emphasis was on identifying problems rather than finding solutions. She seemed to be telling her students that it’s the going there, not getting there, that counts. Although not discussed by Zaretsky, there are echoes of Martin Buber’s “I-Thou” relationship in Weil’s notion of attention.

Zaretsky refrains from terming the Spanish Civil War a turning point for Weil, but it seems to have been just that.  Her brief experience in the war, combined with a growing realization of the existential threat which the Nazis and their fascist allies posed to European civilization, prompted her to revise her earlier commitment to pacifism. This is one consequence of resistance — Zaretsky’s third idea — which aligned Weil with the ancient Stoics and Epicureans, who taught their followers to resist recklessness, panic and passion. For Weil, resistance was an affirmation that the “truly free individual is one who takes the world as it is and aligns with it as best they can” (p.64), as Zaretsky puts it. Weil’s Spanish Civil War experience also gave rise to a growing conviction that “politics alone could not fully grasp the human condition” (p.133).

Rootedness—the fourth idea—arises out of Weil’s visceral sense of having been torn from her native France.  Déracinement, uprooting, was the founding sentiment for The Need for Roots, her final work, in which she emphasized how the persistence of a people is tied to the persistence of its culture—a community’s “deeply engrained way of life, which bends but is not broken as it carries across generations” (p.99).  Rootedness takes place in a “finite and flawed community” and became for Weil the “basis for a moral and intellectual life.” A community’s ties to the past “must be protected for the very same reason that a tree’s roots in the earth must be protected: once those roots are torn up, death follows” (p.126).

There is no evidence that Weil read either the Irish Whig Edmund Burke or the German Romantic Johann Herder, leading conservatives of the late eighteenth and early nineteenth centuries.  Nonetheless, Zaretsky finds considerable resonance between Weil’s sense of rootedness and Burke’s searing critique of the French Revolution, as well as Herder’s rejection of the universalism of the Enlightenment in favor of preserving local and linguistic communities.  Closer to her own time, Weil’s views on community aligned surprisingly with those of Maurice Barrès and Charles Maurras, two leading early twentieth-century French conservatives whose works turned on the need for roots. Zaretsky also finds commonalities between Weil and today’s communitarians, who reject the individualism of John Rawls.

But Weil also applied her views on rootedness to French colonialism, putting her at odds with her wartime boss in London, Charles de Gaulle, who was intent upon preserving the French Empire.  She perceived no meaningful difference between what the Nazis had done to her country—invaded and conquered—and what the French were doing in their overseas colonies.  Weil was appalled by the notion of a mission civilisatrice, a civilizing mission underlying France’s exertion of power overseas. It was essential for Weil that the war against Germany “not obscure the brute fact of French colonization of other peoples” (p.111).  Although Weil developed her idea of rootedness in the context of forced deportations brought about by Nazi conquests, she recognized that rootlessness can occur without ever moving or being moved. Drawing upon her idea of affliction, Weil linked this form of uprooting to capitalism and what the nineteenth-century English commentator Thomas Carlyle termed capitalism’s “cash nexus.”

Zaretsky’s final chapter on Goodness addresses what he terms Weil’s “brilliant and often bruising dialogue with Christianity” (p.134), the extension of her three mystical experiences in the late 1930s.  The dialogue was bruising, Zaretsky indicates, because Weil, a one-time secular Jew, found that her desire to surrender wholly to the Church’s faith ran up against her indignation at much of its history and dogma.  “Appalled by a religion with universal claims that does not allow for the salvation of all humankind,” Weil “refused to separate herself from the fate of unbelievers. Anathema sit, the Church’s sentence of banishment against heretics filled Weil with horror” (p.135).  Yet, in her final years, Catholicism became the “substance and scaffolding of her worldview” (p.34), Zaretsky writes.

But Zaretsky’s emphasis is less on Weil’s theological views than on how she found her intellectual bridge to Christianity through the ancient Greeks, especially the thought of Plato.  Ancient Greek poetry, art, philosophy and science all manifested the Greek search for divine perfection, or what Plato termed “the Good.”  For Weil, faith appears to have been the pursuit of Plato’s Good by other means. The Irish philosopher and novelist Iris Murdoch, who helped introduce Weil to a generation of British readers in the 1950s and 1960s, explained that Weil’s tilt toward Christianity amounted to dropping one “o” from the Good.

* * *

Simone Weil was a daunting figure, intimidating perhaps even to Zaretsky, who avers that her ability to plumb the human condition “runs so deep that it risks losing those of us who remain near the surface of things” (p.38).  Zaretsky, however, takes his readers well below the surface of her body of thought in this eloquent work, producing a comprehensible structure for understanding an enigmatic thinker. His work should hold the interest of readers already familiar with Weil and those encountering her for the first time.

Thomas H. Peebles

La Châtaigneraie, France

July 31, 2021

[NOTE: A nearly identical version of this review has also been posted to the Tocqueville 21 blog, maintained in connection with the American University of Paris’ Tocqueville Review and its Center for Critical Democracy Studies]

 

 


Filed under French History, Intellectual History, Political Theory, Religion