American Polarizer




James Shapiro, Shakespeare in a Divided America:

What His Plays Tell Us About Our Past and Our Future

(Penguin Press, 2020)

In June 2017, New York City’s Public Theater staged a production in Central Park of William Shakespeare’s Julius Caesar, directed by Oskar Eustis, as part of the series known as Shakespeare in the Park.  As in many 21st century Shakespeare productions, non-whites had several leading roles and women played men’s parts.  Eustis’ Caesar, knifed to death in Act III, bore more than a passing resemblance to President Donald J. Trump: he had strange blond hair, wore overly long red ties, tweeted from a golden bathtub, and had a wife with a Slavic accent.

A protester interrupted one of the early performances, jumping on stage after the assassination of Caesar to shout, “This is violence against Donald Trump,” according to The New York Times.  Breitbart News picked up on the story with the headline “‘Trump’ stabbed to death.”  Fox News weighed in, expressing concern that the play encouraged violence against the president.  Corporate sponsors pulled out.  Threats were leveled not only against the Public Theater and its actors, but also against other Shakespeare productions throughout the country.  A fierce but unedifying battle was fought on social media, with little regard for the ambiguities underlying Caesar’s assassination in the play.

The polemic engendered by Eustis’ Julius Caesar unsettled Columbia University Professor James Shapiro, one of academia’s foremost Shakespeare experts.  Shapiro also serves as Shakespeare Scholar in Residence at the Public Theater and in that capacity had advised Eustis’ team on some of the play’s textual issues. His most recent work, Shakespeare in a Divided America: What His Plays Tell Us About Our Past and Our Future, constitutes his response to that polemic; in it he demonstrates convincingly that the frenzied reaction to the 2017 Julius Caesar performance was no aberrational moment in American history.

Starting and finishing with the 2017 performance, Shapiro identifies seven other historical episodes in which a Shakespeare play has been enmeshed in the nation’s most divisive issues: racism, slavery, class conflict, nationalism, immigration, the role of women, adultery, and same-sex love.  Each episode constitutes a separate chapter anchored to a specific year. Shapiro dives deeply and vividly into the circumstances surrounding all seven, revealing a flair for writing and recounting American history that rivals what he brings to his day job as an interpreter of Shakespeare, his plays and his age.  Of the seven episodes, the most gripping is his description of the 1849 riot at New York City’s upscale Astor Place Opera House, one of the worst in the city’s history up to that point.  By comparison, the 2017 brouhaha over Julius Caesar seems like a Columbia graduate school seminar on Shakespeare.

* * *

Fueled by raw class conflict, nationalism and anti-British sentiment, the Astor Place riot was described in one newspaper as the “most sanguinary and cruel [massacre] that has ever occurred in this country,” an episode of “wholesale slaughter” (p.49) — all arising out of competing versions of Macbeth, starring competing actors.  The Briton William Macready, performing as Macbeth at Astor Place, and the American Edwin Forrest, simultaneously rendering Macbeth at the Bowery Theatre, only a few blocks away but in a decidedly rougher part of town, offered opposing approaches to the role that seemed to highlight national differences between the United States and Great Britain: Forrest, the “brash American, Macready the sensitive Englishman” (p.66).  Macready’s “accent, gentle manliness, and propriety represented a world that was being overtaken by everything that Forrest, guiding spirit of the new and for many coarser age of Manifest Destiny, represented” (p.66), Shapiro writes.

Shapiro’s description of the riot underscores how theatres in a rapidly growing New York City in the 1840s were democratic meeting points.  They were “one of the few places in town where classes and races and sexes, if they did not exactly mingle, at least shared a common space. This meant, in practice, that the inexpensive benches in the pit were filled mostly by the working class, the pricier boxes and galleries were occupied by wealthier patrons, and in the tiers above, space was reserved for African Americans and prostitutes” (p.56).  The Astor Place Opera House, built in 1847, was an explicit response of New York’s upper crust to these democratizing tendencies. It did not admit unaccompanied women – there was no place for prostitutes – and it imposed a dress code.  The new rules were seen as fundamentally undemocratic, especially by the city’s large number of recent German and Irish immigrants.

When Forrest opened at the Bowery, his fans somehow obtained tickets to the opening Astor Place performance—who paid for them, Shapiro indicates, remains a mystery—and began heckling Macready, telling him to get off the stage, “you English fool.”  Three days later, the heckling recurred.  But this time a crowd of about 10,000 had gathered outside, an unruly mix of Irish immigrants and native-born Americans, groups that had common cause in anti-English and anti-aristocratic sentiment (many of the Irish immigrants were escaping the Irish potato famine of the mid-1840s, often attributed to harsh British policies; see my 2014 review here of John Kelly’s The Graves Are Walking: The Great Famine and the Saga of the Irish People).  Incited by political leaders and their cronies, the crowd began to throw bricks and stones. They fought a battle with police that continued for several days, with dozens of deaths on both sides.

There were “no winners in the Astor Place riots,” Shapiro writes. The mayhem “brought into sharp relief the growing problem of income inequality in an America that preferred the fiction that it was still a classless society” (p.76).  But the riots also spoke to an “intense desire by the middle and lower classes to continue sharing the public space [of the theatre], and to oppose, violently if necessary, efforts to exclude them from it.  Shakespeare continued to matter and would remain common cultural property in America” (p.78).

In two other powerful chapters, Shapiro demonstrates how Shakespeare’s plays also intertwined with mid-19th century America’s excruciating attempts to come to terms with racism and slavery.  One examines abolitionist former president John Quincy Adams’ public feud in the 1830s over what he considered the abominable inter-racial relationship Shakespeare depicts in Othello between Desdemona and the dark-skinned Othello.  In the second, Shapiro shows how, in a twist that was itself Shakespearean, fate linked President Abraham Lincoln, a man who loved Shakespeare and identified with Macbeth, to his assassin, second-rate Shakespearean actor John Wilkes Booth, himself obsessed with both Julius Caesar and what he perceived as Lincoln’s efforts to undermine the supremacy of the white race.

John Quincy Adams, who served as president from 1825 to 1829, found Desdemona’s physical intimacy with Othello, known at the time as “amalgamation” (“miscegenation” did not enter the national vocabulary until the 1860s), to be an “unnatural passion” against the laws of nature.  Adams’ views might have gone largely unnoticed but for a dinner party in 1833, at which the 66-year-old former president was seated next to 23-year-old Fanny Kemble, a rising young Shakespearean actress from England.  Adams apparently thrust his views of the Othello-Desdemona relationship upon the unsuspecting Kemble.

Two years later, Kemble published a journal about her trip to the United States, in which she described her dinner conversation with the former president.  A piqued Adams felt compelled to respond, elaborating in print about how repellent he found the Desdemona-Othello relationship. The dinner conversation of two years earlier between the ex-president and the rising British actress thus became national news and, with it, Adams’ anxieties about not only the dangers of race-mixing but also the threat posed by disobedient women.

Yet, the ex-president who was so firmly against amalgamation was also a firm abolitionist.  Adams’ abolitionist convictions, Shapiro writes, “seem to have required a counterweight, and he found it in this repudiation of amalgamation” (p.20).  By directing his hostility at Desdemona rather than Othello, moreover, Adams astutely sidestepped criticizing black men, and it “proved more convenient to attack a headstrong young fictional woman than a living one” (p.20).  Although Adams was a prolific writer, his public feud with Kemble represented his sole written attempt to square his disgust for interracial marriage with his abolitionist convictions, and he chose to do so “only through his reflections on Shakespeare” (p.20).

Abraham Lincoln, from humble frontier origins with almost no formal schooling, developed a life-long passion for Shakespeare as a youth.  Shapiro notes that the adult Lincoln regularly asked friends, family, government employees, and relative strangers to listen to him recite, sometimes for hours on end – and then discuss – the same few passages from Shakespeare again and again.  John Wilkes Booth too grew up with Shakespeare, but in altogether different circumstances.

Booth’s father owned a farm in rural Maryland but was also a leading English Shakespearean actor who immigrated to the United States and became a major figure on the American stage.  His three sons followed in their father’s footsteps, with older brothers Edwin and Junius attaining genuine star status, a status that eluded their younger brother John.  Although Maryland was a border state that did not join the Confederacy, John, who had been convinced from his earliest years that whites were superior to blacks, was naturally drawn to the Southern cause.

In 1864, both the year of Lincoln’s re-election and the 300th anniversary of Shakespeare’s birth, Booth was stalking Lincoln and plotting his removal with Confederate operatives.  Lincoln, who had less than six months to live when he was re-elected in November, found himself brooding more and more about Macbeth in his final months, and especially about the murdered King Duncan.  Through his reflection upon the guilt-ridden Macbeth, Shapiro writes, Lincoln felt the “deep connection between the nation’s own primal sin, slavery, and the terrible cost, both collective and personal, exacted by it” (p.113).

After Booth assassinated Lincoln at Ford’s Theatre in Washington in April 1865, many of Lincoln’s enemies likened the assassin, whose favorite play was Julius Caesar, to Brutus as a man who killed a tyrant.  But Macbeth proved to be the play that the nation settled on to “give voice to what happened, and define how Lincoln was to be remembered” (p.116).  Booth had “failed to anticipate that the man he cold-bloodedly murdered would be revered like Duncan, his faults forgotten” (p.118).  For a divided America, the universal currency of Shakespeare’s words offered what Shapiro terms a “collective catharsis” which permitted a “blood-soaked nation to defer confronting once again what Booth declared had driven him to action: the conviction that America ‘was formed for the white not for the black man’” (p.118).

The year 1916 was the 300th anniversary of Shakespeare’s death, a year in which one of his last plays, The Tempest, was used to bolster the case for anti-immigration legislation. The Tempest centers on Caliban, who is left behind, rather than on those who immigrate.  But the point is the same, Shapiro argues: a “more hopeful community . . . depends on somebody’s exclusion” (p.125).  This notion resonated in particular with Massachusetts Senator Henry Cabot Lodge, an avid Shakespeare reader who led the early 20th century anti-immigration campaign.

The unusual number of performances of The Tempest during that tercentenary year meshed with the fierce debate that Lodge led in Congress over immigration.  The legislation that passed the following year curtailed the influx into the United States of immigrants representing “lesser races,” most frequently a reference to Southern and Eastern Europeans. “How Shakespeare and especially The Tempest were conscripted by those opposed to the immigration of those deemed undesirable is a lesser known part of this [immigration] story” (p.124), Shapiro writes.

Closer to the present, Shapiro has chapters on the 1948 Broadway musical Kiss Me, Kate, later a film, about the cast of a production of Shakespeare’s The Taming of the Shrew, which raised the issue of the roles of women in a post-war society; and on the 1998 film Shakespeare in Love, by far the most successful film to date about Shakespeare or any of his plays, which began as a film about same-sex love but evolved into one about adultery.

Kiss Me, Kate takes place backstage at a performance of The Taming of the Shrew.  With music and lyrics by Cole Porter, the Broadway musical contrasted the emerging, post-World War II view of the role of women with the conventional stereotyped gender roles in the Shakespeare play itself, thereby featuring “rival visions of the choices women faced in postwar America” (p.160).  In Shakespeare’s play, “women are urged to capitulate and their obedience to men is the norm,” while backstage “independence and unconventionality hold sway” (p.160).  Kiss Me, Kate deftly juxtaposed a “front stage Shakespeare world that mirrored the fantasy of a patriarchal, all-white America” with a backstage one that was “forthright about a woman’s say over her desires and her career” (p.162).

In the earliest version of the film Shakespeare in Love in 1992, Will found himself drawn to the idea of same-sex love (he was actually attracted to a woman dressed as a man, but the point was that Will thought she was a he).  Same-sex love was reduced to a mere hint in the final version, which is about how the unhappily married Will’s affair with another woman, Viola, helped him overcome his writer’s block, finish Romeo and Juliet, and go on to greatness.  Those creating and marketing Shakespeare in Love, Shapiro writes, “clearly felt that a gay or bisexual Shakespeare was not something that enough Americans in the late 1990s were ready to accept” (p.194).  For box-office success, “Shakespeare could be an adulterer, but he had to be a heterosexual one in a loveless marriage” (p.194).

Shakespeare in Love ends with Viola leaving Will and England for America, reinforcing a myth that persisted from the 1860s through the 1990s of a direct American connection to Shakespeare  — anti-immigration Senator Lodge was one of its most exuberant proponents.  This fantasy, Shapiro writes, speaks to our desire to “forge a physical connection between Shakespeare and America” as the land where his “inspiring legacy came to rest and truly thrived” (p. 193).

* * *

While finding no credible evidence for a direct American  connection to Shakespeare, Shapiro sees a legacy in Shakespeare’s plays that should inspire Americans of all hues and stripes.  Pained by the polarization he witnessed at the 2017 Julius Caesar performance, Shapiro expresses the hope that his book might “shed light on how we have arrived at our present moment, and how, in turn, we may better address that which divides and impedes us as a nation” (p.xxix).  The hope seems forlorn in light of the examples he so brilliantly details, pointing mostly in the other direction: a Shakespeare on the cutting edge of America’s social and political divisions, with his plays often doing the cutting.

Thomas H. Peebles

Paris, France

September 19, 2021

[NOTE: A nearly identical version of this review has also been posted to the Tocqueville 21 Blog, maintained in connection with the American University of Paris’ Tocqueville Review and its Center for Critical Democracy Studies]





Filed under American Politics, American Society, Literature, Politics, United States History

Alarming Portrait of a Ruthlessly Ambitious Crown Prince



Ben Hubbard, MBS: The Rise to Power of Mohammed Bin Salman

(Tim Duggan Books, 2020)

Mohammed Bin Salman, better known by his initials, MBS, is today the Crown Prince of Saudi Arabia and seems poised to become Saudi King upon the death of his ailing father, 86-year-old Salman bin Abdulaziz.  Still youthful at age 36, MBS has achieved what appears to be unchallenged power within the mysterious desert kingdom, the birthplace of Islam and the location of its two most holy sites.  Internationally, MBS is indelibly associated with the gruesome October 2018 murder of Saudi journalist Jamal Khashoggi, a murder he probably ordered and, if not, almost certainly enabled.  Even apart from the Khashoggi killing, the Saudi Crown Prince has compiled a record that is awash in contradictions since his ascent to power began in 2015.

MBS seems bent on modernizing and diversifying the oil-dependent Saudi economy. He has taken highly publicized steps against corruption; clipped the wings of the clergy and religious police; and accorded Saudi women the right to drive.  Young Saudis appreciate that MBS is largely responsible for movie theatres opening and rock concerts now taking place in their country.  But MBS’s record is also one of brutal suppression of opponents, potential opponents and dissidents  – brutal even by Saudi standards.  His regime seems to be borrowing from the authoritarian Chinese model of extensive economic modernization, accompanied by limited and tightly controlled social liberalization, all without feigning even nominal interest in political democratization.  Saudi Arabia under MBS remains, like China, one of the world’s least democratic societies.

In MBS: The Rise to Power of Mohammed Bin Salman, Ben Hubbard, a journalist for The New York Times with extensive experience in Saudi Arabia and the Middle East, has produced the first — and to date only — biography of the Saudi Crown Prince available in English.  Any biography of MBS is bound to be incomplete, given the wall of secrecy MBS has built up around himself, shielding much of the detail of what he has done and how he operates within the generally secretive royal Saudi circles.  But somehow Hubbard managed to scale that wall.  Using a wide array of sources, many anonymous, he has pieced together a remarkably easy-to-read yet riveting and alarming portrait of a man who has eliminated all apparent sources of competition.

Today’s Saudi Arabia is in unfamiliar territory, with power concentrated in a single individual, Hubbard demonstrates convincingly.  Everyone of consequence, from rich tycoons to the extensive royal Saudi family itself, answers to MBS.  There is little that Saudi Arabia’s old elites can do to counter the upstart Crown Prince.  The collegial days when seniority reigned, elder princes divided portfolios among themselves, and decisions were made through consensus are little more than memories of a bygone era.  MBS has “destroyed that system” (p.267), Hubbard bluntly concludes.

* * *

Although MBS studied law at university and finished fourth in his class in 2007, at the time of his graduation there was little reason to expect that he would become anything more than, as Hubbard puts it, a “middling prince who dabbled in business and pitched up abroad now and then for a fancy vacation” (p.15).  Unlike many Saudi princes, the young MBS “never ran a company that made a mark.  He never acquired military experience.  He never studied at a foreign university.  He never mastered, or even became functional in, a foreign language.  He never spent significant time in the United States, Europe, or elsewhere in the West” (p.16).

All that changed in January 2015, when his father Salman became Saudi king at age 79.  MBS, 29 years old, was named Minister of Defense and placed in charge of the Royal Court, with a huge role to play in the kingdom’s finances.  Within days, he had reorganized the government, setting up separate supreme councils for economic development and security.  Although little known outside inner Saudi circles at the time, as Minister of Defense MBS was the force behind the Saudi military intervention in neighboring Yemen to suppress an on-going insurgency led by the Houthis, an Islamist group from Northern Yemen whom the Saudis had long considered proxies for Iran.

Touted as a quick and easy military intervention, the conflict in Yemen turned into a stalemate, with humanitarian and refugee crises that continue to this day. The decision to intervene militarily appears to have been that of MBS alone  — a “one man show,” as a Saudi National Guard official told Hubbard, undertaken with no advance consultation, either internally or with the Saudis’ traditional military benefactors in Washington.  The National Guard official told Hubbard that the Saudi intervention was “less about protecting the kingdom than burnishing MBS’ reputation as a tough leader” (p.91).

In April 2015, King Salman appointed MBS’ cousin, the considerably older Mohammed bin Nayef, known as MBN, as Crown Prince, with MBS named “Deputy Crown Prince,” second in line to the throne.  MBN had been the Saudis’ official voice and face in the war on terror, with deep CIA contacts.  The Americans thought he was the perfect “next generation” king.  But MBS had other ideas.  Although the Deputy Crown Prince remained outwardly deferential to his cousin, he appears to have been plotting MBN’s ouster at least from the time his cousin was appointed Crown Prince.  When the plot succeeded in June 2017, with MBS replacing his cousin as Saudi Crown Prince, the official Saudi version was that the appointment was the decision of King Salman alone.

Hubbard tells an altogether different story.  In his account, MBS in effect kidnapped his cousin to force his abdication.  When MBN refused to abdicate, a council friendly to MBS met to formally “ratify” what was presented as a “decision” of the king to make MBS Crown Prince.  Only then did MBN give in, signing a document of abdication.  He was placed under house arrest by guards loyal to MBS and relieved of his counterterrorism and security duties, which were “reassigned” to a new security body that reported to MBS.  His bank accounts were frozen and he was stripped of many of his assets.  In March 2020, MBN was arrested on charges of treason and has not been heard from since, held in a location unknown even to his lawyers.

MBS attracted world attention a few months later, in November 2017, when he invited many fellow members of the royal family, along with other movers and shakers within the kingdom, to the posh Ritz-Carlton hotel in Riyadh for what was billed as an anti-corruption conference.  Anxious to meet MBS and obtain insider advantages, the attendees eagerly came to Riyadh, only to be all but arrested and forcibly detained when they arrived.  The detentions at what was dubbed the world’s most luxurious prison lasted weeks and sometimes months.  By mid-February 2018, most of the detainees had “settled” with the government and were allowed to leave.  The Ritz detainments were what Hubbard describes as a pivot point in MBS’ ascendancy, an “economic earthquake that shook the pillars of the kingdom’s economy and rattled its major figures” (p.200), all of whom thereafter answered to MBS.

Less noticed internationally was a surprise royal decree stripping the Wahhabi religious police of many of their powers.  Henceforth, they could not arrest, question, or pursue subjects except in cooperation with the regular police. The decree, part of an on-going effort to curtail the authority of Saudi Arabia’s ultra-conservative religious establishment, “defanged the clerics,” Hubbard writes, “clearing the way for vast changes [which] they most certainly would have opposed”  (p.63).  The changes involved some wildly popular measures, especially the opening of commercial cinemas and other entertainment venues, such as concerts and opera.  Equally popular was a decree allowing Saudi women to drive.

For decades, activist Saudi women had challenged, often at considerable cost to themselves, a ban on driving that was only a Wahhabi religious dictate, not codified officially in Saudi law (in 2017, I reviewed here the memoir of Manal Al-Sharif, one such activist).  But when MBS saw fit to declare women eligible to drive in June 2018, he did not give any credit to the activist women. They were never thanked publicly or even acknowledged; some were jailed almost simultaneously with the lifting of the ban.

MBS’ grandiose and upbeat plans for modernizing the Saudi economy by shifting away from its oil-dependency found expression in his Vision 2030 document.  Prepared in collaboration with a phalanx of international consultants, Vision 2030 projected that the kingdom would create new industries, rely on renewable energy, and manufacture its own military equipment, all in an effort to “transform itself into a global investment giant, and establish itself as a hub for Europe, Asia, and Africa” (p.67).  MBS presented his plan when he accompanied his father to a meeting in Washington with President Barack Obama, where it was perceived as a slick set of talking points, without much depth.

Vision 2030, Saudi Arabia and MBS all fared better when the administration of Donald Trump replaced the Obama administration in early 2017.  One of the greatest ironies of the Trump era, Hubbard writes, was that Trump, “after demeaning Saudi Arabia and its faith throughout the campaign, would, in the course of a few months, anoint Saudi Arabia a preferred American partner and the lynchpin of his Middle East policy” (p.107).  Saudi-American relations improved in the Trump years in no small part because of the warm if unlikely relationship that MBS struck with the president’s son-in-law, Jared Kushner, two young “princelings,” as Hubbard describes them, “an Arab from central Arabia and a Jew from New Jersey” (p.113).

The two princelings were “both in their thirties and scions of wealthy families who had been chosen by older relatives to wield great power.  They both lacked extensive experience in government, and saw little need to be bound by its strictures” (p.113). Their relationship blossomed because Kushner viewed MBS as someone who could help unlock peace between Israel and Arabs, while MBS expected Kushner to push the United States to champion Vision 2030, stand up to Iran, and support him as he sought to consolidate power.  But the Khashoggi killing in October 2018 temporarily flummoxed even the Trump administration.

Khashoggi had served briefly as one of MBS’s confidantes as the Crown Prince began his rise to power.  Their initial meeting led Khashoggi to believe that MBS welcomed openness and had given him a “mandate to write about, and even critique, the prince’s reforms” (p.78).  But as Khashoggi became a more visible critic of the regime from abroad, mostly in the United States where he was a permanent legal resident and wrote for The Washington Post, the relationship deteriorated.  Hubbard was an associate and friend of Khashoggi and dedicates a substantial portion of the last third of his book to the slain journalist and what we know about his killing.

Hubbard presents a plausible argument that MBS may not have actually ordered the killing — essentially that MBS’s team was carrying out what they thought the boss wanted, without being explicitly ordered to do so.  Even so, MBS had “fostered the environment in which fifteen government agents and a number of Saudi diplomats believed that butchering a nonviolent writer inside a consulate was the appropriate response to some newspaper columns” (p.280).  Khashoggi’s killing served as a wake-up call for the world.  It “flushed away much of the good will and excitement that MBS had spent the last four years generating” (p.276).

In the aftermath of the killing, President Trump issued a statement in which he insisted that United States security alliances and massive Saudi purchases of US weaponry were more important than holding top Saudi leadership accountable.  “We do have an ally, and I want to stick with an ally that in many ways has been very good,” Trump was quoted as saying.  After publication of Hubbard’s book, a new administration led by Joe Biden arrived in Washington amidst hopes that the United States would recalibrate its relationship with Saudi Arabia, particularly in light of the known facts about the Khashoggi killing.

* * *

Those hopes increased in February of this year when the Office of the Director of National Intelligence (ODNI) released a two-page summation of its investigation into the killing (the Trump administration had withheld the full report for nearly two years).  The ODNI concluded that MBS had “approved” the Khashoggi killing.  But its  conclusion was derived inferentially rather than from any “smoking gun” evidence it chose to reveal publicly.

The ODNI based its conclusion on MBS’ “control of decision-making in the Kingdom since 2017, the direct involvement of a key adviser and members of Muhammad bin Salman’s protective detail in the operation, and the Crown Prince’s support for using violent measures to silence dissidents abroad, including Khashoggi.” Given MBS’s “absolute control of the Kingdom’s security and intelligence organizations,” the ODNI found it “highly unlikely that Saudi officials would have carried out an operation of this nature without the Crown Prince’s authorization.”

To the disappointment of human rights activists, the Biden administration nonetheless determined that it would impose no direct punishment on MBS.  Sanctioning MBS, according to an anonymous senior official quoted in The Washington Post, would have been viewed in the kingdom as an “enormous insult,” making an ongoing relationship with Saudi Arabia “extremely difficult, if not impossible.”  The senior official said that, after looking at the MBS case extremely closely over the course of about five weeks, the Biden foreign policy team had reached the “unanimous conclusion” that there was “another more effective means to dealing with these issues going forward.”  As US Secretary of State Antony Blinken stated at a public press conference, sounding eerily like former President Trump, the relationship with Saudi Arabia is “bigger than any one individual.”

The Biden administration did identify 76 other Saudi officials subject to sanctions for their presumed roles in the killing.  President Biden also announced the end of US military supplies and intelligence sharing for the Saudi military intervention in Yemen. He has moreover refused to speak directly with MBS, restricting his contact to his father, King Salman.  For the time being, MBS’ Washington contacts as the Saudi defense minister stop at the level of the US Secretary of Defense, Lloyd Austin.

* * *

These protocol decisions will have to be revisited if, as expected, MBS becomes king when his ailing father dies.  One way or another, the United States will need to find a way to deal with a man likely to be a consequential figure on the world stage for decades to come.

Thomas H. Peebles

La Châtaigneraie, France

August 31, 2021




Filed under American Politics, Biography, Politics

Viewing Responsibility for Human Rights Through a Forward-Looking Lens




Kathryn Sikkink, The Hidden Face of Rights:

Toward a Politics of Responsibilities (Yale University Press, 2020)

Kathryn Sikkink, Professor at the Harvard Kennedy School of Government, is one of the leading academic experts on international human rights law — the body of principles arising out of a series of post-World War II human rights treaties, conventions, and other international instruments. Recently, I reviewed her Evidence for Hope: Making Human Rights Work in the 21st Century here.  In that work, Sikkink took on a host of critics of the current state of international human rights law who had challenged both its legitimacy and its effectiveness.  Before Evidence for Hope, she was the author of the highly acclaimed The Justice Cascade: How Human Rights Prosecutions Are Changing World Politics, where she argued forcefully for holding individual state officials, including heads of state, accountable for human rights violations.

Now, Sikkink asks us to look at human rights, and especially how we can best implement those rights, through a different lens.  In her most recent work, The Hidden Face of Rights: Toward a Politics of Responsibilities, portions of which were originally delivered as lectures at Yale University’s Program in Ethics, Politics and Economics, Sikkink argues that we need to increase our focus on the duties, obligations, and responsibilities undergirding human rights. Although “duties,” “obligations,” and “responsibilities” are nearly functional equivalents, “responsibilities” is Sikkink’s preferred term. Moreover, Sikkink is concerned with what she terms “forward-looking” rather than “backward-looking” responsibilities.

Forward-looking responsibility turns largely on the development of norms, the voluntary acceptance of mutual responsibilities about appropriate behavior.  It stands in contrast to backward-looking responsibilities, which are based on a “liability model” that asks who is responsible for a violation of human rights and how that person or institution can be held accountable — or responsible.  Sikkink seeks to supplement rather than supplant the liability model, describing it as appropriate in some contexts but not others.  Although necessary, backward-looking responsibilities “cannot address many of the complex, decentralized issues that characterize human rights today” (p.40), she contends.

For Sikkink, forward-looking responsibility is ethical and political, not legal.  She is not arguing to make forward-looking responsibilities legally binding.  Nor is she seeking to create new rights—only to implement existing ones more effectively.  But she uses the term ‘human rights’ broadly, to include the political, civil, economic, and social rights embodied in the major post-war treaties and conventions, along with new rights, such as the right to a clean environment and to freedom from sexual assault.

The crux of Sikkink’s argument is that voluntary acceptance of norms—not fear of sanctions—is in most cases a more effective path to full implementation of human rights.  Sustaining and reinforcing norms entails a pragmatic, “what-might-work” approach, brought about by what she terms “networked responsibilities”: a collective effort in which all those connected to a given injustice—the “agents of justice,” more often private individuals than state actors—step forward to do their share. One of Sikkink’s principal objectives is to bring the theory of human rights into line with existing practice.

Sikkink notes that the activist community charged with implementation of human rights already has “robust practices of responsibility. But it does not yet have explicit norms about the responsibility of non-state actors in implementing human rights” (p.36).  Rights activists are reluctant to talk about responsibilities of non-state actors out of concern that such talk might “take the pressure off the state, risk blaming the victim, underplay the structural causes of injustice, or crowd out other more collective forms of political action” (p.5).  Human rights activists, Sikkink emphasizes, while avoiding recognizing responsibility explicitly, have nonetheless implicitly “assumed responsibility and worked in networks with other agents of justice to bring about change” (p.127).  In this sense, responsibilities are the “hidden face of rights, present in the practices of human rights actors, but something that activists don’t talk about” (p.5).

In the first third of the book, Sikkink establishes the theoretical framework for a forward-looking conception of human rights implementation. In the last two-thirds, she applies her forward-looking model to five issues that are close to her heart and home: voting, climate change, sexual assault, digital privacy, and free speech on campus.  Her discussion of these issues is decidedly US-centric, based mostly on how they arise at Harvard and, to a lesser extent, on other American university campuses, with only minimal reference to what a forward-looking approach to implementation of the same rights might entail in other countries.  Among the five issues, voting receives the most extensive treatment, about one-third of the book, as much as the other four topics combined.  Several factors prompted me to question whether voting is the best example of forward-looking responsibility in operation.

* * *

In the voting context, forward-looking responsibility means above all acceptance of a norm that considers voting a non-negotiable responsibility of citizenship, much like serving on a jury and paying taxes. But we also have a “networked responsibility” both to convince others to accept the voting norm and to assist them in exercising that right.  Sikkink’s discussion zeroes in on how to increase voter turnout among Harvard students and, through focus-group sessions with such students, examines the challenges of persuading them to accept the voting norm.

Sikkink recognizes that Harvard students are far from representative of American university students, let alone of Americans generally.  Although at the pinnacle of privilege in American society, Harvard students, like their peers at other universities, nonetheless under-participate in local and national elections. The difficulties they encounter in registering to vote and casting their ballots are a telling indication that the electoral system is complex for far wider swaths of the American public.  But focusing on them leaves out consideration of how to reach and persuade less privileged groups.  A few of Stacey Abrams’s insights would have been useful.

Sikkink’s book, moreover, went to press prior to the November 2020 presidential election, in which approximately 159 million Americans voted — a record turnout, constituting about two-thirds of the eligible electorate and a whopping seven percentage points higher than the 2016 turnout. Yet the election and its aftermath have given rise to unprecedented turmoil, including unsupportable claims of a “stolen” election and the storming of the U.S. Capitol in January 2021, fundamentally altering the national conversation over voting in the United States from what it was a year ago. Sikkink’s concerns about voter apathy no longer seem quite so central to that conversation.

Rather, more than six months after the election, a substantial minority of the American electorate still adheres to the notion of a “stolen” election, despite overwhelming evidence that the official election results were fully accurate within any reasonable margin of error.  In the aftermath of the election, furthermore, legislatures in several states have adopted or have under consideration measures that seem designed specifically to discourage some of America’s most vulnerable groups from voting, under the guise of preventing voter fraud — even though evidence of actual fraud in the 2020 election was scant to non-existent.  Sikkink anticipated this issue when she noted that state officials in some parts of the United States “do not want to expand voter turnout and even actively suppress it” (p.111).  In such situations, she writes, “networked responsibility of non-state actors to change voting norms and practices is all the more important” (p.111).  If Sikkink were writing today, it seems safe to say that she would elaborate upon this point at greater length.

Unlike some of the rights Sikkink discusses, however, voting to select a country’s leaders is firmly established in written law.  But the responsibility side of this unquestioned right must compete with a plausible claim that in a democratic society based on freedom of choice, a right not to vote should be recognized as a legitimate exercise of that freedom — a way, for instance, of expressing one’s disenchantment with the electoral and political system or, more parochially, dissatisfaction with the candidates offered on the ballot.  Many of the students in the Harvard focus group expressed the view that voting should be “situational and optional” (p.92).  Sikkink emphatically rejects this argument, suggesting at one point that casting a blank ballot is the only responsible way to express such views: “if one is going to refuse to vote in protest, it must be just as hard as voting” (p.121), she writes.

By coincidence, as I was wrestling with Sikkink’s arguments against recognizing a right not to vote in June of this year — and finding myself less than fully convinced — I was following presidential elections in Iran, which witnessed its lowest voter turnout in four decades: slightly less than 50%, with another 14% casting blank ballots.  Dissidents in Iran organized a campaign this year that urged abstention as the most principled way to express opposition to what the campaign leaders maintained was an intractably tyrannical regime.

The abstention campaign argued that the voting process for the election had been structured to eliminate any serious reform candidates; that the Iranian government since 1979 had an extensive track record of voter intimidation and manipulation of vote counting; and that the Iranian government uses its usually high turnout rates (85% for the 2009 presidential election; over 70% in 2013 and 2017) to affirm its own legitimacy. In short, there seemed to be little reason for Iranians to anticipate that the election would be “free and fair,” which may be the necessary predicate to Sikkink’s rejection of a right not to vote — a point she may wish to elaborate upon subsequently. (Were she writing today, Sikkink might also address the “freedom” not to wear a mask or to be vaccinated during a pandemic. I also wondered how she would react to the regional French elections, held immediately after the Iranian election, in which an astounding two-thirds of the electorate abstained.)

If Sikkink’s application of forward-looking responsibility to voting contains rough edges, her application to climate change makes for a near perfect fit. While it is obviously of utmost importance to know the underlying causes of climate change and to understand how we reached the current crisis, backward-looking responsibility — seeking to hold responsible those who contributed to the crisis — has only limited utility.  Without letting big fossil fuel polluters off the hook for their disproportionate contribution to the current state of affairs, backward-looking responsibility “must be combined with forward-looking responsibilities,” Sikkink argues, “including the responsibilities of actors who are not directly to blame” (p.54).  When it comes to climate change, we are all “agents of justice” if we want to preserve a livable planet.

The backward-looking liability model remains critical when applied to the right to be free from sexual assault, a large umbrella category that includes all non-consensual sexual activity or contact, including but not limited to rape.  Any effort to limit sexual assault must “first hold perpetrators responsible—and, where appropriate, criminally accountable” (p.139), Sikkink writes. But we also need to “think about the forward-looking responsibility of multiple agents of justice, especially how potential victims, as capable agents, can take measures to prevent future violence” (p.138).

Digital privacy, Sikkink explains, transcends the interest of individuals to limit the dissemination of their own personal information.  She describes how we can inadvertently expose others to online privacy invasions.  In protecting privacy online, we need to become proficient in what she terms “digital civics,” another term for the forward-looking responsibility of Internet users to help ensure both their own privacy rights and those of other users.

A separate but related aspect of digital civics is learning how to recognize and not spread disinformation, “fake news,” thereby raising questions about the bounds of the right to free speech online.  We all have an ethical and political responsibility, if not quite a legal one, to evaluate sources and to refrain from sharing (or “liking”) information that does not appear to have sound factual grounding, Sikkink argues. The extent of the bounds of free speech also arises on campus in finding a balance between the right to speak itself, and the right to protest speech that one finds offensive.

On university campuses today, many students feel they have an obligation to defend fellow students, and oppressed people generally, against hurtful and degrading speech. Sikkink notes that over half the students responding to one survey thought it was acceptable to shout at speakers making what they perceived to be offensive statements, while 19% said it was acceptable to use violence to prevent what is perceived to be abusive speech. These are not responsible exercises of one’s right to protest offensive speech, Sikkink responds.  Violence and drowning out the speech of others are more than just “problematic from the point of view of the ethic of responsibility” (p.136).  Pragmatically, these forms of protest have proven unlikely to generate support for the ideas espoused by those using such tactics.

* * *

Pragmatism thoroughly infuses Sikkink’s notion of forward-looking responsibility, as applied not only to campus speech and the other rights discussed here but, presumptively, to the full range of recognized human rights.   Her pragmatism animates the question she closes the book with, literally her bottom line: in addition to — or even instead of — asking who is to blame, we should ask: “What together we can do?” (p.148).  As her fellow academic theorists evaluate the fresh perspective that Sikkink brings to international human rights in this compact but thought-provoking volume, they will want to weigh in on the pertinence of this question to our understanding of those rights.


Thomas H. Peebles

Caen, France

August 21, 2021

[NOTE: A nearly identical version of this review has also been posted to the Tocqueville 21 blog, maintained in connection with the American University of Paris’ Tocqueville Review and its Center for Critical Democracy Studies]





Filed under Political Theory, Rule of Law

Deciphering a Confounding Thinker



Robert Zaretsky, The Subversive Simone Weil:

A Life in Five Ideas (University of Chicago Press, 2021)


Simone Weil is considered today among the foremost twentieth-century French intellectuals, on par with her luminous contemporaries Simone de Beauvoir, Jean-Paul Sartre, and Albert Camus. And yet she was not widely known when she died at age 34 in 1943. Although she wrote profusely, only small portions of her writings were published during her lifetime. Much of her written work was left in private notebooks and published posthumously. It was only after the Second World War, as Weil’s writings increasingly came to light, that a comprehensive picture of her thinking emerged —comprehensive without necessarily being coherent. In The Subversive Simone Weil: A Life in Five Ideas, Robert Zaretsky attempts to provide this coherence.

Indeed, Weil was a confounding thinker whose body of thought and the life she lived seem awash in contradictions. As Zaretsky notes at the outset, Weil was:

an anarchist who espoused conservative ideals, a pacifist who fought in the Spanish Civil War, a saint who refused baptism, a mystic who was a labor militant, a French Jew who was buried in the Catholic section of an English cemetery, a teacher who dismissed the importance of solving a problem, [and] the most willful of individuals who advocated the extinction of the self (p.2).

Zaretsky, a professor at the University of Houston and one of the Anglophone world’s most fluent writers on French intellectual and cultural history, aims not so much to dispel these contradictions as to distill Weil’s intellectual legacy, contradictions and all, into five core ideas encapsulating the body of political, social, and theological thought she left behind. These five ideas are: affliction, attention, resistance, rootedness, and goodness—each the object of a separate chapter.

Unsurprisingly, these five Weilian ideas are far more intricate and multi-faceted than the single words suggest, and they are inter-related, with what Zaretsky terms “blurred borders” (p.14).  Moreover, the five ideas are presented in approximate chronological order: the first three chapters, on affliction, attention, and resistance, concern mostly Weil in the 1930s, while the last two, on rootedness and goodness, primarily cover her wartime years from 1940 to 1943—her most productive literary period.

Each chapter can be read as a standalone essay, and Zaretsky would likely discourage us from searching too eagerly for threads that unite the five into an overarching narrative. But there is one connecting thread which provides context for the apparent contradictions in Weil’s life and thought: collectively, the five ideas tell the story of Weil’s transformation from an exceptionally empathetic yet otherwise conventional 1930s non-communist, left-wing intellectual—Jewish and secular—to someone who in her final years found commonality with conservative political and social thought, embraced Catholicism and Christianity, and was profoundly influenced by religious mysticism. Although not intended as a biography in the conventional sense, The Subversive Simone Weil begins with a short but helpful overview of Weil’s abbreviated life before plunging into her five ideas.

* * *

Weil was born in 1909 and brought up in a progressive, militantly secular bourgeois Jewish family in Paris. Her older brother André became one of the twentieth century’s most accomplished mathematicians. She graduated in 1931 from France’s renowned École Normale Supérieure, the same school that had accorded diplomas to Jean-Paul Sartre and Raymond Aron a few years earlier.  After ENS, she took three secondary teaching positions in provincial France, and also managed to find her way to local factories, where she taught workers in evening classes and with limited success did some of the hard factory work herself.

In 1936, Weil joined the Republican side in the Spanish Civil War, and was briefly involved in combat operations before she inadvertently stepped into a vat of boiling cooking oil, severely injuring her foot. After she returned to France to allow her injury to heal, she had three seemingly genuine mystical religious experiences that set in motion what Zaretsky characterizes as rehearsals for her “slow and never quite completed embrace of Roman Catholicism” (p.134).  When Nazi Germany invaded France in 1940, Weil and her parents caught the last train out of Paris for Marseille, where they stayed for almost two years before leaving for New York. While in Marseille, Weil was deeply influenced by Joseph-Marie Perrin, a nearly blind Dominican priest, and came close but stopped short of a formal conversion to Catholicism.

Weil left her parents in New York for London, where she joined Charles de Gaulle’s government-in-exile, with ambitions that never materialized to return to France to battle the Nazis directly. While in London, her primary responsibility was to work on reports detailing a vision for a liberated and republican France. Physically frail most of her life, Weil suffered from migraines, and may have been on a hunger strike when she died of complications from tuberculosis in 1943, in a sanatorium south-east of London.

* * *

Malheur was Weil’s French term for “affliction.” This is the first of the five ideas that Zaretsky distills from Weil’s life and thought, in which we see Weil at her most political. Her idea of affliction appears to have arisen principally from her experiences working in factories early in her professional career.  Yet, affliction for Weil was the condition not just of factory workers, but of nearly all human beings in modern, industrial society—the “unavoidable consequence of a world governed by forces largely beyond our comprehension, not to mention our control” (p.36).  Affliction was “ground zero of human misery” (p.36), entailing psychological degradation as much as physical suffering.

The early Weil was attracted politically to anarcho-syndicalism, a movement that urged direct action by workers as the means to achieve power in depression-riddled 1930s France, with direct democracy of worker co-operatives as its end. In these years, Weil was an “isolated voice on the left who denounced communism with the same vehemence as she did fascism” (p.32), Zaretsky writes, comparing her to George Orwell and Albert Camus. With what Zaretsky describes as “stunning prescience” (p.32), she foresaw the foreboding consequences of totalitarianism emerging both in Stalin’s Russia and Hitler’s Germany.

Attention, sometimes considered Weil’s central ethical concept, involves how we see the world and others in it. But it is an elusive concept, “supremely difficult to grasp”  (p.46).  Attention was attente in French: waiting, which requires the canceling of our desires.  Attention takes place in what Zaretsky terms the world’s salle d’attente, its waiting room, where we “forget our own itinerary and open ourselves to the itineraries of others” (p.54).  Zaretsky sees the idea of attention at work in Weil’s approach to teaching secondary school students, where her emphasis was on identifying problems rather than finding solutions. She seemed to be telling her students that it’s the going there, not getting there, that counts. Although not discussed by Zaretsky, there are echoes of Martin Buber’s “I-Thou” relationship in Weil’s notion of attention.

Zaretsky refrains from terming the Spanish Civil War a turning point for Weil, but it seems to have been just that.  Her brief experience in the war, combined with a growing realization of the existential threat which the Nazis and their fascist allies posed to European civilization, prompted her to revise her earlier commitment to pacifism. This revision was one consequence of resistance—Zaretsky’s third idea—which aligned Weil with the ancient Stoics and Epicureans, who taught their followers to resist recklessness, panic, and passion. For Weil, resistance was an affirmation that the “truly free individual is one who takes the world as it is and aligns with it as best they can” (p.64), as Zaretsky puts it. Weil’s Spanish Civil War experience also gave rise to a growing conviction that “politics alone could not fully grasp the human condition” (p.133).

Rootedness—the fourth idea—arises out of Weil’s visceral sense of having been torn from her native France.  Déracinement, uprooting, was the founding sentiment for The Need for Roots, her final work, in which she emphasized how the persistence of a people is tied to the persistence of its culture—a community’s “deeply engrained way of life, which bends but is not broken as it carries across generations” (p.99).  Rootedness takes place in a “finite and flawed community” and became for Weil the “basis for a moral and intellectual life.” A community’s ties to the past “must be protected for the very same reason that a tree’s roots in the earth must be protected: once those roots are torn up, death follows” (p.126).

There is no evidence that Weil read either the Irish Whig Edmund Burke or the German Romantic Johann Herder, leading conservatives of the late eighteenth and early nineteenth centuries.  Nonetheless, Zaretsky finds considerable resonance between Weil’s sense of rootedness and Burke’s searing critique of the French Revolution, as well as Herder’s rejection of the universalism of the Enlightenment in favor of preserving local and linguistic communities.  Closer to her own time, Weil’s views on community aligned surprisingly with those of Maurice Barrès and Charles Maurras, two leading early twentieth-century French conservatives whose works turned on the need for roots. Zaretsky also finds commonalities between Weil and today’s communitarians, who reject the individualism of John Rawls.

But Weil also applied her views on rootedness to French colonialism, putting her at odds with her wartime boss in London, Charles de Gaulle, who was intent upon preserving the French Empire.  She perceived no meaningful difference between what the Nazis had done to her country—invaded and conquered—and what the French were doing in their overseas colonies.  Weil was appalled by the notion of a mission civilisatrice, a civilizing mission underlying France’s exertion of power overseas. It was essential for Weil that the war against Germany “not obscure the brute fact of French colonization of other peoples” (p.111).  Although Weil developed her idea of rootedness in the context of forced deportations brought about by Nazi conquests, she recognized that rootlessness can occur without ever moving or being moved. Drawing upon her idea of affliction, Weil linked this form of uprooting to capitalism and what the nineteenth-century English commentator Thomas Carlyle termed capitalism’s “cash nexus.”

Zaretsky’s final chapter on Goodness addresses what he terms Weil’s “brilliant and often bruising dialogue with Christianity” (p.134), the extension of her three mystical experiences in the late 1930s.  The battle was bruising, Zaretsky indicates, because Weil, a one-time secular Jew, found that her desire to surrender wholly to the Church’s faith ran up against her indignation at much of its history and dogma.  “Appalled by a religion with universal claims that does not allow for the salvation of all humankind,” Weil “refused to separate herself from the fate of unbelievers. Anathema sit, the Church’s sentence of banishment against heretics filled Weil with horror” (p.135).  Yet, in her final years, Catholicism became the “substance and scaffolding of her worldview” (p.34), Zaretsky writes.

But Zaretsky’s emphasis is less on Weil’s theological views than on how she found her intellectual bridge to Christianity through the ancient Greeks, especially the thought of Plato.  Ancient Greek poetry, art, philosophy and science all manifested the Greek search for divine perfection, or what Plato termed “the Good.”  For Weil, faith appears to have been the pursuit of Plato’s Good by other means. The Irish philosopher and novelist Iris Murdoch, who helped introduce Weil to a generation of British readers in the 1950s and 1960s, explained that Weil’s tilt toward Christianity amounted to dropping one “o” from the Good.

* * *

Simone Weil was a daunting figure, intimidating perhaps even to Zaretsky, who avers that her ability to plumb the human condition “runs so deep that it risks losing those of us who remain near the surface of things” (p.38).  Zaretsky, however, takes his readers well below the surface of her body of thought in this eloquent work, producing a comprehensible structure for understanding an enigmatic thinker. His work should hold the interest of readers already familiar with Weil and those encountering her for the first time.

Thomas H. Peebles

La Châtaigneraie, France

July 31, 2021

[NOTE: A nearly identical version of this review has also been posted to the Tocqueville 21 blog, maintained in connection with the American University of Paris’ Tocqueville Review and its Center for Critical Democracy Studies]




Filed under French History, Intellectual History, Political Theory, Religion

Breaking Away


J.H. Elliott, Scots and Catalans:

Union and Disunion (Yale University Press, 2018)

[NOTE: This review has also been posted to the Tocqueville 21 blog, maintained in connection with the American University of Paris’ Tocqueville Review and its Center for Critical Democracy Studies]

Are the United Kingdom and Scotland barreling toward a crisis over Scottish independence of the magnitude of that which rattled Spain in 2017, when Catalonia, the country’s northeast corner that includes Barcelona, unilaterally declared its independence? That possibility seems less far-fetched after early May’s parliamentary elections in Scotland, in which the Scottish National Party (SNP) fell just one seat shy of an absolute majority. In coalition with the Scottish Green Party, the SNP is now in a position to set the legislative agenda for Scotland. To no one’s surprise, Nicola Sturgeon, Scottish First Minister and SNP leader, announced after the recent elections that she would seek a second referendum on Scottish independence, presumably similar to the one that took place in 2014. For Sturgeon, a second independence referendum is now a matter of “when, not if.”  But British Prime Minister Boris Johnson reiterated his opposition to another referendum; that of 2014 was a “once in a generation” event, the Prime Minister explained.

Separatism also advanced appreciably in Catalan regional elections in February of this year, with pro-independence parties capturing a clear majority of seats in the regional parliament. But numerous parties with a range of views on separation seek to carry the independence banner in Catalonia. The movement has no single voice comparable to that of Sturgeon and the SNP.

While no one can say with certainty where Scotland and Catalonia are heading, J.H. Elliott, Regius Professor Emeritus at Oxford University, has produced an extraordinarily timely, in-depth guide to how separatism has come to dominate the 21st century politics of each: Scots and Catalans: Union and Disunion. From the mid-15th century up through the Catalan crisis of 2017, Elliott traces the relationship of Scotland and Catalonia to the larger entities we now call Great Britain and Spain, relationships in which genuine grievances mix with myths, resentments, and manipulations of history.

The Catalan crisis of 2017, the endpoint of Elliott’s narrative, ensued after regional authorities organized a non-binding independence referendum, conducted over the strong objection of the central government in Madrid.  Ninety percent of Catalans who voted approved independence, but several major Catalan parties boycotted the referendum and only 43% of eligible voters actually voted. When the Catalan regional parliament adopted a resolution declaring the region an independent republic, the central government responded by invoking the 1978 Spanish constitution to remove regional authorities and impose direct rule from Madrid.  Carles Puigdemont, the Catalan regional president, was formally accused of treason and fled to Belgium with key members of his cabinet, where he remains to this day.

In sharp contrast to the 2017 Catalan initiative, the 2014 Scottish independence referendum had the approval of the central government in London, having been negotiated by then-Prime Minister David Cameron.  Scottish voters moreover soundly rejected independence, 55% to 45%, with 85% of eligible voters casting ballots. But one of the main issues in the campaign was the desire of many Scottish voters to maintain membership in the European Union as part of the United Kingdom, rather than secede and apply for EU membership as an independent nation. The Brexit referendum two years later, also a Cameron-approved measure, upended this understanding. While a far from united United Kingdom approved the initiative to leave the European Union, Scottish voters adhered to the “remain” position by an emphatic 62%-38% margin, with about two-thirds of eligible Scottish voters participating.

Elliott is scathing in his condemnation of the Catalan secessionists’ decision to press ahead in 2017 with their unilateral declaration of independence, describing it as an “act of folly, unleashing consequences that never seemed to have crossed the proponents’ minds as they took the plunge” (p.263).  In more muted terms, he appears to endorse the outcome of the orderly 2014 referendum in Scotland: “Stability had triumphed over risk, pragmatism over utopianism, fear over hope” (p.246).  But Elliott treats the Brexit referendum two years later in only two non-judgmental paragraphs.  Many Scots who voted “No” in 2014 have felt compelled to reassess their position in light of Brexit. Elliott’s decision not to weigh in more forcefully on the impact of Brexit constitutes a missed opportunity in this otherwise painstakingly comprehensive work.

Although Elliott focuses almost exclusively on the Catalan and Scottish independence movements, easily the most visible in today’s Europe, they are hardly the only ones. Depending upon how one counts, there are presently about 20 active separatist movements in Europe, some of which seem to be mainly quests for more autonomy rather than secession.  Finding common denominators among them can be difficult – each is mostly a product of its own historical and cultural circumstances. But nationalism is usually considered one such denominator, often the only one, and what Elliott terms a “resurgent nationalism” (p.4) is at play in both Catalonia and Scotland.

These and other 21st century secessionist movements harken back to the classical 19th century European version of nationalism: the idea that a people with a common culture and history — and often a common language, as in Catalonia – have an inherent right to rule themselves. This idea, which buttressed Europe’s 1848 uprisings, produced the modern nation-state, a state with a nationalist creed binding it together — a common core of shared principles, traditions and values accepted by its disparate regions, and its major ethnic, religious, and cultural groups. But separatist movements in Scotland, Catalonia and elsewhere are predicated on a rejection, implicit if not explicit, of the nationalist creed and in this sense are the antipode of classical 19th century nationalism. Some separatist movements partake of xenophobic and authoritarian-leaning nationalist impulses. But neither the Scottish nor the Catalan independence movement can be described in these terms – if anything, both Scotland and Catalonia tilt leftward on 21st century Europe’s left-right pendulum.

Scots and Catalans consists of six chapters, each focused on a discrete historical period. It begins with “Dynastic Union, 1469-1625.”  1469, the year in which Ferdinand of Aragon married Isabella of Castile, marked the beginning of a composite, multi-regional monarchy on the Iberian Peninsula, with the Crown of Aragon including the principality of Catalonia. The last chapter, “Breaking Away? 1975-2017,” covers the time from the death of Spanish dictator General Francisco Franco and the beginnings of modern democracy in Spain in 1975, up through the Catalan constitutional crisis of 2017.  Unlike the authors of many comparative histories, Elliott does not rely on separate chapters for his two subjects. His narrative goes back and forth between Catalonia and Scotland, Spain and Britain, setting out the two histories side-by-side. Although not quite his intention, this technique highlights how different Catalonia’s relationship to Spain has been from that of Scotland to Great Britain from the early 18th century onward. Only in the late 20th and early 21st centuries does Elliott find significant convergences between the two independence movements.

* * *

Prior to its 1707 union with England, Scotland had been an independent kingdom, one shaken by the 17th century’s religious and civil wars that had upended its more powerful neighbor to the south.  Catalonia, by contrast, had never been a sovereign state in any modern sense of the term.  But as one of several rebellious provinces within Spain’s composite monarchy, Catalonia had a colorable claim to a set of ancient liberties and privileges that the Nueva Planta decrees of Philip of Anjou, the first Bourbon King of Spain, erased between 1707 and 1716.

Designed to impose the centralized French model on Spain’s unruly provinces, the Nueva Planta decrees abolished the Catalan legislature and imposed the Castilian language – today’s Spanish — on the region. While Scotland’s consensual association with England was the result of genuine negotiations between two sovereign kingdoms, Catalonia was “subjected to a settlement imposed by a victorious monarch, who stigmatized its peoples as rebels” (p.89).  Catalonia came to be seen, both by its citizens and the central government in Madrid, as a territory under military occupation.

Throughout the 18th and 19th centuries, the feeling in Spain that the Catalans were inherently intractable never disappeared. Catalans, constantly inveighing against “centralization,” responded to pressures from Madrid by emphasizing with “growing stridency” the “uniqueness of their own history and culture” (p.163), Elliot writes. By contrast, the Scots felt less need to be assertive about their distinctive heritage, and less obsessed about their potential loss of identity. Tensions between London and Edinburgh were “far fewer than those to be found in the Barcelona-Madrid relationship” (p.163).

Spain fell under the rule of two military dictatorships in the 20th century. That of Primo de Rivera, from 1923 to 1930, preceded the 1936-39 Spanish Civil War and the ensuing Franco regime, which lasted until the General’s death in 1975. Both Primo de Rivera and Franco pursued national unity by ruthlessly suppressing regionalist tendencies across Spain. But Franco probably distrusted Catalonia more than any other region during his long rule. Spain did not begin its transition to a modern democratic nation-state until after Franco’s death.

In 1979, two years after Spain’s first free elections since the 1930s, Catalan voters approved a statute of autonomy for the region that recognized Catalonia as a “nationality,” gave the Catalan language an official status equal to Castilian Spanish, and conceded extensive powers to Catalonia in education, culture and language. Catalonia henceforth became what Elliott describes as an “integral but largely self-governing part of what the bulk of its inhabitants had long wanted – a democratic, decentralized and modernizing Spain” (p.229).

1979 was also the year Margaret Thatcher and her Conservative Party were voted into office in Britain. Thatcher moved quickly to shut down all talk of “devolution,” which envisioned re-establishing the Scottish parliament and according more autonomy to Scotland. In Elliott’s view, Thatcher probably did more to spur the modern separatist movement in Scotland than any other single individual. Devolution came to Scotland in 1997, when Scottish voters approved creation of a separate Scottish parliament, the first since the 1707 union with England. By 1997, Scotland enjoyed approximately the same degree of autonomy from the central government in Westminster that Catalonia had achieved in 1979.

Elliott further fits both independence movements into a broader 21st century framework, wherein pressures upon the traditional nation-state from above, driven by the European Union, economic inequalities, and what we often term globalization, have generated a “general sense in many parts of the western world that highly bureaucratized central governments [have] become too remote to understand the true needs and problems of the governed” (p.3).  Separatism for Scotland and Catalonia, as elsewhere, appears to offer an easy answer to those who feel they have lost control over their lives. “Independence [will] allow them once again to be masters in their own house,” he writes. But much of this, he adds tartly, referring more to Catalonia than Scotland, is “nostalgia for a world that never was” (p.267).

* * *

A second independence referendum for Scotland – and with it Scottish independence – now appears, if not inevitable, more probable than not, despite Boris Johnson’s opposition. As Scottish journalist Jamie Maxwell wrote in the New York Times after the May elections, a Johnson veto would be tantamount to “transforming Britain from a voluntary association based on consent into a compulsory one” – an ironic echo of the way Catalan secessionists view their relationship to Spain.

Continued political stalemate, rather than realistic prospects for independence, looks like the better bet for Catalonia. The region lacks a leader comparable to Sturgeon, who has ruled out a “wildcat referendum” and is generally cautious, steady and unusually adept at playing the long game – words rarely used to describe former Catalan regional president Carles Puigdemont.  Sturgeon seems confident that Johnson will “ultimately buckle under the weight of democratic pressure,” as Maxwell puts it.  Independence may nevertheless be in the cards in this decade for both Scotland and Catalonia.  But in demonstrating the deep historical dissimilarities between Scotland’s relationship to Great Britain and Catalonia’s to Spain, Elliott’s erudite history suggests that the two entities are likely to travel distinctly different paths to independence.

Thomas H. Peebles

Paris, France

July 8, 2021





Filed under British History, European History, Spanish History

The Case for Evidence-Based Optimism on International Human Rights


Kathryn Sikkink, Evidence for Hope:

Making Human Rights Work in the 21st Century

(Princeton University Press, 2017)

The idea of international human rights – rights that transcend national boundaries and state sovereignty – crystallized in the post-World War II period with the adoption of the initial charter of the United Nations in 1945 and the Universal Declaration of Human Rights (UDHR) in 1948.  Promulgated under the auspices of the United Nations, the UDHR is considered the founding text of today’s international human rights law.  As Kathryn Sikkink observes in Evidence for Hope: Making Human Rights Work in the 21st Century, there is a compelling simplicity to the idea of international protection for human rights: “if your government fails to protect your rights, you have somewhere else to turn for recourse” (p.57).

Today, there are several treaties and conventions supplementing and complementing the UDHR.  Almost all the world’s countries have ratified some or all of these instruments, while numerous international and non-governmental organizations, institutions, and practitioners monitor compliance and otherwise seek to advance the international human rights agenda across the globe.  The idea of international human rights has become, Sikkink writes, “one of the dominant moral and political discourses in the world today” (p.8).  And yet.

The general public seems convinced that human rights abuses are worsening and widening across the globe, not diminishing, and there is much to support this view in just about any edition of a daily newspaper: China’s assault on its predominantly Muslim Uyghur population; crackdowns on democracy proponents in Myanmar and Hong Kong; refugee crises brought about by civil wars in Syria, Yemen and Ethiopia; extra-judicial killings in the Philippines; and Russian targeting of political dissidents, to name only a few.  Moreover, one year ago, the respected watchdog organization Human Rights Watch issued a withering report on the United States, finding that it was moving “backwards” on human rights, flouting international human rights and humanitarian law.

In addition, there is no shortage of academics taking aim at the international human rights movement.  Assiduous readers of this blog will recall Stephen Hopgood’s The Endtimes of Human Rights, reviewed here in 2016.  Eric Posner has produced a work with an equally gloomy title, The Twilight of Human Rights Law.  And Samuel Moyn’s Not Enough: Human Rights in an Unequal World was the subject of an extensive roundtable exchange that my colleagues at the Tocqueville 21 blog organized in 2018.  These and other works make different points, but together constitute what might be termed “human rights pessimism,” a now-substantial body of thought calling into question both the legitimacy and effectiveness of the post-World War II human rights movement.

Sikkink, Professor of Human Rights Policy at Harvard Kennedy School of Government, seeks to counter the various manifestations of human rights pessimism with “evidence-based optimism,” an optimism grounded “not on wishful thinking, but on an effort to understand more comprehensively the strengths and weaknesses of human rights data” (p.13).  She describes her purpose as “not to deflect criticism or to diminish concern with human rights crises, but to clarify some of the terms of the debate, the types of comparisons being used, and the kinds of evidence that would be more or less persuasive in supporting and evaluating claims” (p.8).

Methodically, but with a scholarly zest, Sikkink scrutinizes the pessimists’ challenges to the distinct but related issues of the legitimacy and the effectiveness of the modern human rights movement.  In this context, legitimacy turns principally on the notion that forms the crux of Hopgood’s case against the modern human rights movement: that its core principles are the product of the world’s most prosperous countries, especially those from Western Europe and North America, imposed upon its least prosperous ones, in Asia, Africa and Latin America – the “Global North” and “Global South” respectively, to use common shorthand.  Sikkink responds by demonstrating the often overlooked contributions of diplomats, lawyers, and intellectuals from the Global South to the body of thought that preceded the UN Charter and the UDHR, along with their contributions to those instruments and to the development of international human rights law after promulgation of the UDHR.

Measuring human rights effectiveness and such related matters as “progress,” “results,” and “success” constitutes what Sikkink terms the “single biggest unrecognized and unnamed source of disagreement among human rights scholars and within human rights movements” (p.31).  Such measurement requires rigorous application of social science methodologies, Sikkink insists.  She gives unsatisfactory grades to most of the human rights pessimists, primarily because they frequently measure not on an empirical basis, using qualitative and quantitative data, but rather against an ideal standard of what would be commendable in a more perfect world.

* * *

Sikkink lays out a convincing case that the international human rights movement’s origins lie at least as much in the Global South as in the Global North — maybe more.  She places particular emphasis upon the Latin American contribution to the movement.  The idea of international protection of human rights, she suggests, probably originated with the early 20th century Chilean jurist Alejandro Álvarez, who in 1917 proposed the idea of international protection for individual human rights to the American Institute of International Law.

Álvarez saw international protection for human rights as a means to bolster rather than undercut state sovereignty, particularly as a weapon for the weaker states of Latin America to contain the greater raw power of the United States.  Álvarez’s ideas were later taken up and expanded by other jurists and scholars from both Latin America and Europe.  This circulation of ideas, Sikkink writes with Hopgood in mind, is “very different from the crude understanding of some scholars today, who claim that human rights ideas all started in the Global North and were imposed upon the Global South” (p.63).

At the San Francisco Conference of 1945 which established the framework for the UN, the British and French delegations resisted formal declarations of rights out of concern for their colonies; American representatives worried about Southern legislators interested above all in protecting racial separation; and the Soviet Union was less than enthusiastic about formal declarations.  Despite this resistance, the Charter emerged with seven human rights references, a testament to the work of delegations from outside the Global North, especially those from Latin America. The references reflected “not the language of the great powers,” Sikkink writes, “but rather that of the Global South.”  They were “adopted by the great powers in response to pressure from small states and civil society” (p.71).  Without these references, Sikkink finds it unlikely that the UDHR would have been drafted at all.

But the UDHR was not the first detailed enumeration of rights adopted by an inter-governmental organization.  Several months earlier, in April 1948 in Bogotá, Colombia, 20 Latin American countries and the United States approved the American Declaration of the Rights and Duties of Man.  All of the rights enumerated in the American Declaration appeared subsequently in the UDHR.  Latin American representatives, for example, were responsible for the language about duties that found a place in the final version of the UDHR, reflecting their more communitarian and less individualistic vision of freedom in modern society, and they managed to insert an article into the UDHR about the right to justice.

Sikkink counters a related argument, advanced by Moyn and others, that the human rights movement lay largely dormant after the UDHR’s adoption until the administration of American President Jimmy Carter in the late 1970s.  Rather, the movement gathered force from the 1950s through the mid-1970s, Sikkink contends, as jurists and diplomats, especially but not exclusively from the Global South, fought to “ensure the creation of institutions with the power necessary to enforce human rights” (p.97).  During these decades of Cold War confrontation, the United States was largely on the sidelines, prioritizing instead its support for anti-communist regimes, many with dubious human rights records.

The decolonization movements of the 1950s and early 1960s linked independence and notions of national self-determination to democracy and human rights.  The anti-apartheid campaign in South Africa, in Sikkink’s view the most important and sustained human rights struggle of the Cold War period, explicitly embraced the notions of human rights embodied in the UDHR.  In 1965, Asian and African countries, led by India, spearheaded passage of the International Convention on the Elimination of All Forms of Racial Discrimination (CERD).

Work also continued during the 1950s and 1960s on the two most consequential follow-up instruments to the UDHR, the International Covenant on Civil and Political Rights (ICCPR) and the International Covenant on Economic, Social and Cultural Rights (ICESCR), drafted over the course of 15 years and opened for ratification in 1966.  The Carter administration helped “activate and eventually consolidate” (p.28) these institutional developments, Sikkink writes with emphasis, but they had been underway for the previous thirty years (why human rights became a priority of the Carter administration is the subject of Barbara Keys’ Reclaiming American Virtue, reviewed here in 2015).

In measuring effectiveness and what constitutes human rights progress, Sikkink explains that the difference between empirical comparisons and comparisons to an ideal might be thought of as two different types of lenses or yardsticks: a comparison to the ideal “involves contrasting what has actually happened with what should happen in an ideal world, whereas empirical comparison contrasts what is actually happening to what has happened in the same country in the past or to what is happening in other countries at the same time” (p.3).

As a skilled social scientist, Sikkink expects human rights critics to be clear about their methods.  When comparing cases to the ideal, the ideal should be explicit, not implicit. “This allows others to evaluate the arguments and the quality of the evidence and judge the work,” she writes.  Too many critics “take for themselves the luxury of criticisms without making their own suppositions sufficiently clear” (p.48).  Applying empirical data to a cross section of key human rights issues across the globe – the status of women, the prevalence of torture and the frequency of the death penalty among them – Sikkink makes the case that there is real progress in the world, after all.  By “looking more carefully at the history of human rights and at current trends we can find hope for progress in spite of struggles and backlash” (p.247), she concludes.

Evaluating the effectiveness of the human rights movement also needs to consider what Sikkink terms the “information paradox”: the more intense the focus on human rights, the more violations are likely to come to light, especially as reporting improves.  That doesn’t necessarily mean the number of violations is increasing.  Inadvertently, “as the reports accumulate and are taken up by the media, they may also convince people that human rights movements are not making any progress at curbing such violations” (p.14).

As in much of social science, measuring effectiveness in human rights involves identifying correlations.  On the positive side, democracy and human rights are “intimately related,” Sikkink writes, and, “so far in human history, it is hard to have one without the other.  That does not mean that democracy inevitably leads to human rights; it just means that democracy is a necessary, but not at all sufficient, condition for human rights progress” (p.131-32).   Although there is general agreement among scholars and specialists that democratic political institutions reduce repressive behavior, “research indicates that democratic institutions mainly contribute to decreased repression only after a certain high democracy threshold is reached” (p.193).

On the negative side, there are “risk factors” that signal the potential for increased human rights violations, among them war, particularly civil war; the presence of insurgent groups and separatist movements; ideologies that exclude and dehumanize certain people or groups; and poverty.  Sikkink does not include economic inequality among the risk factors, and treads lightly around the topical question whether there is a correlation between rising economic inequality and the human rights movement.

Such inequality is usually attributed to “neo-liberalism,” a shorthand reference to government policies, often associated with the Thatcher and Reagan years, favoring less-regulated markets, open international trade, and privatization of some former state functions, frequently accompanied by reductions in social safety net benefits.  Some commentators contend that the relationship between human rights and neo-liberalism is one of “complicity,” that human rights policies somehow make possible neo-liberal policies, while Moyn argues that the human rights movement has been “powerless against inequality” (p.38).

Sikkink responds that human rights policies can reduce economic inequalities.  The human rights movement has made significant inroads in reducing gender inequalities, for example, with an impact on overall economic inequality.  But more fundamentally, while mitigating economic inequalities is a laudable goal – both within and among nation-states – it is best achieved by policies like more progressive taxation, closing tax havens, preventing money laundering and cracking down on corruption.  Human rights movements, she contends, “don’t have to be the only tools for fighting inequality” (p.239).

* * *

Sikkink does not pretend that the modern human rights movement is flawless.  But her spirited defense of an imperfect movement should be reassuring, offering evidence-based reasons not only to reject despair but even to hope that the movement in the 21st century can continue, as it has done in the past, to deliver empirically measurable progress.

Thomas H. Peebles

Paris, France

May 31, 2021




Filed under History, Rule of Law, World History

Geopolitical Angle to Muslim Sectarian Intolerance and Violence




Kim Ghattas, Black Wave:

Saudi Arabia, Iran and the Rivalry That Unraveled the Middle East

(Henry Holt & Co., 2020)

“What happened to us?”  That is the question that Kim Ghattas asks at the outset of Black Wave: Saudi Arabia, Iran and the Rivalry That Unraveled the Middle East.  It is a question that “haunts us in the Arab and Muslim world” (p.1), she writes.  Ghattas, a Lebanese journalist who once worked as Middle East correspondent for the BBC, remembers hearing from her parents and her parents’ generation about a time when young people in the Muslim world spent their days  “reciting poetry in Peshawar, debating Marxism late into the night in the bars of Beirut, or riding bicycles on the banks of the Tigris River in Baghdad” (p.1). That halcyon world is gone.  Today the same Arab and Muslim world is defined and divided by religious sectarianism, intolerance and violence.  Ghattas seeks to explain how and why the world of her parents became “unraveled,” to borrow from her title.

Ghattas’ title indicates that much of the explanation can be found in the intensifying geopolitical rivalry between Iran and Saudi Arabia.  The two countries are presently locked in what she terms a “destructive competition for leadership of the Muslim world, in which both countries wield, exploit, and distort religion in the more profane pursuit of raw power” (p.2), fighting proxy wars in such places as Syria, Iraq and Yemen.   Religion, and more specifically the religious schism between Shiite and Sunni branches of the Islamic faith, lie inescapably close to the core of the Iranian-Saudi rivalry.

Shias are roughly 10% of the world’s Muslim population, centered in and near contemporary Iran, with most of the rest of the Muslim world adhering to some form of Sunni Islam.  Saudi Arabia, predominantly Sunni but with a Shiite minority in its eastern region, is home to Islam’s two holiest sites, Mecca and Medina, and considers itself the world guardian of the Islamic faith (the Shia-Sunni schism dates back to differences which arose in 632 CE about who should succeed the Prophet Muhammad as leader of the new faith).

Ghattas traces the beginnings of Middle Eastern unraveling to the moment in 1979 when Iran startled the world by overthrowing the regime of Shah Mohammad Reza Pahlavi and establishing the Islamic Republic of Iran, the triggering event for her narrative.  In February of that year, Ayatollah Ruhollah Khomeini, then in his seventies, emerged from exile as the new republic’s officially designated “Supreme Leader,” the guardian of the holy law of the prophet.  Shia Islam, not Islam generally, was officially declared the state religion.  The unraveling process is in large measure what Ghattas terms the “ripple effect” of the Iranian revolution.

From that starting point in 1979, Ghattas weaves together an intricate story that takes her well beyond the borders of Iran and Saudi Arabia, with chapters on Lebanon, Syria, Egypt, Iraq, Pakistan and Afghanistan, digging deeply into  how the Iranian-Saudi rivalry affected the internal politics of each (Ghattas frequently uses the term “Greater Middle East” to include Pakistan and Afghanistan).  She casts light on numerous familiar events and how they were shaped more than commonly realized by the Iranian-Saudi rivalry. She shows Iran making little effort to hide its interest in exporting its Islamic revolution, with the Saudis opposing Iran’s adventures and advancing their own goals more furtively.

Ghattas leavens the story’s necessary intricacy by introducing her readers to a host of activists and ordinary citizens fighting against sectarian intolerance and violence.  The people she features, from almost every country in the region and many walks of life, rarely know one another and most will not be known to Western readers (although assiduous readers of this blog will recognize at least two: Masih Alinejad, leader of the movement to give Iranian women the option of not wearing the veil; and Shirin Ebadi, Iranian civil rights lawyer — and judge prior to the Khomeini regime making women ineligible for judgeships — who won the Nobel Peace Prize in 2003; both wrote memoirs reviewed here, Alinejad in December 2019 and Ebadi in October 2017).  Even if they do not know one another, these individuals are all “fighting the same battles” (p.3), Ghattas writes.  Amidst so much reason to despair, she finds her hope in them.  They constitute a “sample of a large majority that given the opportunity and the space will seize the occasion to rise against the forces of darkness that have impoverished the region” (p.334).

* * *

Prior to 1979, the Sunni-Shia schism lay largely dormant throughout most of the Muslim world, even though clerics from each branch generally dismissed the other’s brand of Islam as heretical.  Further, up to that time, Saudi Arabia and Iran had been twin pillars in American policy to counter the spread of Soviet influence in the region.  But the revolution in Iran upended this equilibrium.  Once in power as Iran’s Supreme Leader, Khomeini proceeded in the name of Islamic purity to crush ruthlessly all opposition.  “Revolt against God’s government is a revolt against God” (p.36), he once proclaimed.  The dictatorship of the Shah was replaced by what Ghattas terms an “autocracy of the holy law” (p.36).  Khomeini, a man whom she describes as not just a “theocrat” but an “irredeemable monster” (p.37), served as Iran’s Supreme Leader until his death in 1989.

The Saudis initially greeted the regime change in Tehran with equanimity, only to be rebuffed by Khomeini, who saw the Saudi royal family, the al Sauds, as unworthy custodians of Mecca and Medina – mere “camel grazers” (p.168).  By year’s end, the Saudis were “determined to position themselves as the sole defenders of the Muslim faith, at all cost, and on every front, from education to politics, from culture to the battlefields” (p.82).   But the 1979 revolution in Iran was only the first of several events that shook up the Greater Middle East that year.  Two others stand out for Ghattas: Saudi religious zealots’ siege of the Holy Mosque in Mecca in November and the Soviet Union’s invasion of Afghanistan in December.

The rebels who seized the Mosque in Mecca, also in the name of Islamic purity, launched the first direct challenge to the al-Saud dynasty in its history.  Although labeled “deviants” in official Saudi circles, the rebels were well-groomed products of training received from homegrown clerics representing Wahhabism, the ultra-conservative, uncompromisingly puritanical Sunni interpretation of Islam.  With rumors circulating that the United States was behind the takeover — rumors that Khomeini encouraged — the Saudis at the behest of their American allies issued a tepid denial of foreign involvement.  Ghattas terms the denial a “cowardly effort to deflect attention for as long as possible from the kingdom’s own responsibility in creating the monster that had hijacked Islam’s holiest site” (p.64).  Obfuscation and feigned ignorance became typical of Saudi responses in the years to come, a form of subterfuge designed to “evade responsibility for any violence or intolerance connected to the kingdom” (p.64).

After the embarrassment of the siege of Mecca, the Saudis saw Afghanistan as an “opportunity to rebuild their reputation as the champions of Islam against the godless communists” (p.84).  Afghanistan became the first battleground in modern times for jihad, now commonly thought of as Islamic holy war against infidels, Muslim and non-Muslim.  The cumulative effect of the Iranian revolution, the siege in Mecca and the Soviet invasion of Afghanistan during the same year was, Ghattas writes, “toxic, and nothing was ever the same again . . . Nothing has changed the Arab and Muslim worlds as deeply and fundamentally as [these three] events of 1979” (p.2).

Since the early 1980s, the Iranian imprint has been most visible in Lebanon and Syria.  Lebanon became what amounted to an outpost of the Iranian Revolution, with Khomeini’s image and Iranian flags ubiquitously displayed. Syria, under Hafez al-Assad, its president from 1970 to 2000, was the first country to recognize Khomeini’s victory in 1979.  With an ally in Iran, Assad, a member of the obscure Alawite minority, a tenth-century offshoot of Shia Islam, saw the potential of an Iranian-Syrian axis as a tool to “scare and blackmail countries like Saudi Arabia” (p.86).  Today, Iran continues to support the regime of Assad’s son Bashar al-Assad in Syria’s seemingly interminable civil war.

Saudi Arabia’s early efforts to undermine Iran may have included providing Saddam Hussein with a green light in 1980 to launch the Iran-Iraq war, Ghattas suggests.  Hussein, who rarely traveled outside Iraq, took a 24-hour trip to Saudi Arabia to meet with Saudi King Khaled in August of 1980 and declared war on Iran the following month, a war that lasted eight years and turned out to be a gift to Khomeini, who used it astutely to “solidify his grip on the country in the face of an external enemy” (p.88).

Death for apostasy was introduced into Islam in 1989 when Khomeini issued a fatwa declaring Indian-born writer Salman Rushdie’s novel The Satanic Verses blasphemous and calling upon Muslims to find and execute Rushdie.  But the revolt against the novel actually started with Saudi-funded demonstrations in northern England, led by clerics primarily of Saudi and Egyptian origin.  In Iran, the book had been translated into Persian and sold freely in Tehran.  It was only after watching demonstrations against the novel on television that Khomeini issued his fatwa.

Rushdie survived the fatwa, but his Japanese and Turkish translators, along with a Norwegian publisher, did not.  With no basis in the Quran, Khomeini’s fatwa represented what Ghattas terms a “strange twist in the competition between Iran and Saudi Arabia to position themselves as the standard-bearer of global Islam. But Saudi Arabia’s dubious contribution would be forgotten; the fatwa against Rushdie would become solely an Iranian story” (p.181).

Khomeini’s doctrine of death for apostasy reached new and savage heights in Pakistan in the late 1980s with the systematic killings of Shias by Sunnis, the “first premeditated, state-sponsored attack by one sectarian militia against another sect” (p.145).   The sectarian killings in Pakistan were the product of the “provocative zealotry” (p.145) of Pakistani president Zia ul-Haq, a staunch American ally who, with Saudi encouragement, had imposed a particularly severe version of Sharia law on his country during the fateful year 1979.  The sectarian violence, “born out of the seeds of the Iranian Revolution and its clash with Saudi Wahhabism,” marked not only the beginning of a proxy war in Pakistan between Khomeini and the House of Saud but also the “start of modern day Sunni-Shia sectarian violence” (p.146-47).

After the 2003 American invasion of Iraq and Saddam Hussein’s fall, Saudi Arabia predictably denied any responsibility for the Sunni-Shiite violence that erupted.  As Ghattas notes, the Saudis had plausible deniability.  Although the state was “not organizing anything,” individual Saudis were donating money to the cause, “just as they had during the Afghan war, and fiery preachers in the kingdom were not silenced even while they exhorted their brothers to fight infidels in Iraq” (p.233).  The sectarian violence in Iraq gave rise to the Islamic State in Iraq and Syria, ISIS, still a presence in Syria’s ongoing civil war.  ISIS’ grisly executions and “bizarre, misguided obsession with breaking statues and shrines” (p.289) invited comparisons to Saudi Wahhabism.  Official Saudi Arabia vehemently rejected such comparisons.  But, Ghattas notes acidly, ISIS is inescapably “Saudi progeny.”  The Saudi kingdom “may not have directed the rise of this cult of fanatics,” she writes, but it had done “more than enough” to feed them (p.290).

Jamal Khashoggi, the murdered Saudi journalist and activist, was a personal friend of Ghattas and another of the individuals she features.  In a long and heart-breaking portrait, she includes what she has been able to learn about his gruesome end in 2018.  Her friend, she writes, underestimated both his own importance and “how brazen and evil” (p.324) Saudi Crown Prince Mohammed bin Salman, MBS, and those around him had become, willing to punish anyone seen as a dissident or critic of the regime.  She also notes that MBS appears to be the most hardened anti-Iranian in power in Saudi Arabia in the post-1979 era.

With MBS seemingly on course to become Saudi king when his ailing father dies, the prospects for defusing tensions between Iran and Saudi Arabia could be written off as unrealistic.  Ghattas would like to see Saudi Arabia moderate its anti-Shia rhetoric within the kingdom and curb the influence it seeks outside its borders “in the form of money spent on mosques and teachings that hew close to the kingdom’s understanding of Islam” (p.333).  In the end, however, defusing the “paranoid, vengeful insecurities” of Saudi Arabia and curtailing the “militant ardor of those [Saudis] who feel threatened by Iran’s expansionist designs” (p.333) is likely to require something akin to regime change in Iran.  Shirin Ebadi, the Iranian civil rights lawyer and Nobel Prize winner, has suggested that this could start with a constitutional change removing the position of Supreme Leader, a suggestion Ghattas appears to endorse.

* * *

Despite Ghattas’ hopes for the activists fighting against the darkness in today’s Greater Middle East, her kaleidoscopic yet dispiriting account demonstrates persuasively that there is no easy road to arresting an unraveling process propelled by religious intolerance and sectarian violence.

Thomas H. Peebles

La Châtaigneraie, France

May 10, 2021






Filed under Middle Eastern History, Religion

Converging Visions of Equality


Peniel E. Joseph, The Sword and the Shield:

The Revolutionary Lives of Malcolm X and Martin Luther King, Jr. (Basic Books)

[NOTE: A version of this review has been posted to the Tocqueville 21 blog:  Tocqueville 21 takes its name from the 19th century French aristocrat who gave Americans much insight into their democracy.  It seeks to encourage in-depth thinking about democratic theory and practice, with particular but by no means exclusive emphasis on the United States and France.  The site is maintained in connection with the American University of Paris’ Tocqueville Review and its Center for Critical Democracy Studies].

Martin Luther King, Jr., and Malcolm X met only once, a chance encounter at the US Capitol on March 26, 1964.  The two men were at the Capitol to listen to a debate over what would become the Civil Rights Act of 1964, a measure that banned discrimination in employment, mandated equal access to most public facilities, and had the potential to be the most consequential piece of federal legislation on behalf of equality for African-Americans since the Reconstruction era nearly a century earlier.  There wasn’t much substance to the encounter. “Well, Malcolm, good to see you,” King said.  “Good to see you,” Malcolm responded. There may have been some additional light chitchat, but not much more.  Fortunately, photographers were present, and we are the beneficiaries of several iconic photos of the encounter.

That encounter at the Capitol constitutes the starting point for Peniel Joseph’s enthralling The Sword and the Shield: The Revolutionary Lives of Malcolm X and Martin Luther King, a work that has some of the indicia of a dual biography, albeit highly condensed.  But Joseph, a professor at the University of Texas at Austin who has written prolifically on modern African American history, places his emphasis on the two men’s intellectual journeys.  Drawing heavily from their speeches, writings and public debates, Joseph challenges the conventional view of the two men as polar opposites who represented competing visions of full equality for African Americans.  The conventional view misses the nuances and evolution of both men’s thinking, Joseph argues, obscuring the ways their politics and activism came to overlap.  Each plainly influenced the other.  “Over time, each persuaded the other to become more like himself” (p.13).

My final stages of this review on the convergence of the two men’s thinking coincided with the trial of Derek Chauvin for the killing of George Floyd last May, along with the recent killing of still another black man, Daunte Wright, in the same Minneapolis metropolitan area.  Watching and reading about events in Minneapolis, I couldn’t help concluding that the three familiar words “Black Lives Matter”  –  the movement that led demonstrations across the country and the world last year to protest the Floyd killing — also neatly encapsulate the commonalities that Joseph identifies in The Sword and the Shield.

* * *

In March 1964, King was considered the “single most influential civil rights leader in the nation” (p.2), Joseph writes, whereas Malcolm, an outlier in the mainstream civil rights movement, was “perhaps the most vocal critic of white supremacy ever produced by black America” (p.4).    The two men shared extraordinary rhetorical and organizational skills.  Each was a charismatic leader and deep thinker who articulated in galvanizing terms his vision of full equality for African Americans.  But these visions sometimes appeared to be not just polar opposites but mutually exclusive.

In the conventional view of the time, King, the Southern Baptist preacher with a Ph.D. in theology, deserved mainstream America’s support as the civil rights leader who sought integration of African Americans into the larger white society, and unfailingly advocated non-violence as the most effective means to that end.  White liberals held King in high esteem for his almost religious belief in the potential of the American political system to close the gap between its lofty democratic rhetoric and the reality of pervasive racial segregation, discrimination and second-class citizenship, a belief Malcolm considered naïve.

A high school dropout who had served time in jail, Malcolm became the most visible spokesman for the Nation of Islam (NOI), an idiosyncratic American religious organization that preached black empowerment and racial segregation.  Often termed a “black nationalist,” Malcolm found the key to full equality in political and economic empowerment of African American communities.  He considered racial integration a fool’s errand and left open the possibility of violence as a means of defending against white inflicted violence.  He seemed to embrace some form of racial separation as the most effective means to achieve full equality and improve the lives of black Americans – a position that the media found to be ironically similar to that of the hard-core racial segregationists with whom both he and King were battling.

But Joseph demonstrates that Malcolm was moving in King’s direction at the time of their March 1964 encounter.  Coming off a bitter fallout with the NOI and its leader, Elijah Muhammad, he had cut his ties with the organization just months before the encounter.  He had traveled to Washington to demonstrate his support for the civil rights legislation under consideration.  Thinking he could make a contribution to the mainstream civil rights movement, Malcolm sought an alliance with King and his allies.  Although that alliance never materialized, King began to embrace positions identified with Malcolm after the latter’s assassination less than 11 months later, stressing in particular that economic justice needed to be a component of full equality for African Americans.  King also became an outspoken opponent of American involvement in the war in Vietnam, of which Malcolm had long been critical.

Singular events had thrust both men onto the national stage.  King rose to prominence as a newly-ordained minister who at age 26 became the most audible voice of the 1955-56 Montgomery, Alabama, bus boycott, after Rosa Parks famously refused to give up her seat on a public bus to a white person.  Malcolm’s rise to fame came in 1959 through a nationally televised 5-part CBS documentary on the NOI, The Hate that Hate Produced, hosted by then little-known Mike Wallace.  The documentary was an immediate sensation.  It was a one-sided indictment of the NOI, Joseph indicates, intended to scare and outrage whites.  But it made Malcolm and his NOI boss Elijah Muhammad heroes within black communities across the country.  King seemed to buy into the documentary’s theme, describing the NOI as an organization dedicated to “black supremacy,” which he considered “as bad as white supremacy” (p.85).

But even at this time, each man had connected his US-based activism to anti-colonial movements that were altering the face of Africa and Asia.  Both recognized that the systemic nature of racial oppression “transcended boundaries of nation-states” (p.73).  Malcolm made his first trip abroad in 1959, to Egypt and Nigeria.  The trip helped him “internationalize black political radicalism,” by linking domestic black politics to the “larger world of anti-colonial and Third World liberation movements” (p.18-19), as Joseph puts it.  King, whose philosophy of non-violence owed much to Mahatma Gandhi, visited India in 1959, characterizing himself as a “‘pilgrim’ coming to pay homage to a nation liberated from colonial oppression against seemingly insurmountable odds” (p.80).  After the visit, he “proudly claimed the Third World as an integral part of a worldwide social justice movement” (p.80).

After his break with the NOI and just after his chance encounter with King at the US Capitol, Malcolm took a transformative five-week tour of Africa and the Middle East in the spring of 1964.  The tour put him on the path to becoming a conventional Muslim and prompted him to back away from anti-white views he had expressed while with the NOI.  In Mecca, Saudi Arabia, he professed to see “sincere and true brotherhood practiced by all colors together, irrespective of their color.” (p.188).   He went on to Nigeria and “dreamed of becoming the leader of a political revolution steeped in the anti-colonial fervor sweeping Africa” (p.191).  Malcolm’s time in Africa, Joseph concludes, “changed his mind, body, and soul . . . The African continent intoxicated Malcolm X and informed his political dreams” (p.192-93).

By the time of their March 1964 meeting, moreover, the two men had begun to recognize each other’s potential.  After over a decade of forcefully criticizing the mainstream civil rights movement, Malcolm now recognized King’s goals as his own but chose different methods to get there.  Malcolm also had a subtle effect on King.  The “more he ridiculed and challenged King publicly,” Joseph writes, the more King “reaffirmed the strength of non-violence as a weapon of peace capable of transforming American democracy” (p.155).  King for his part had begun to look outside the rigidly segregated South and toward major urban centers in the North, Malcolm’s bailiwick, as possible sites of protest that would expand the freedom struggle beyond its southern roots.

Joseph cites three instances in which Malcolm extended written invitations to King, all of which went unanswered. But in early February 1965, after Malcolm had participated in a panel discussion with King’s wife, King concluded that the time had come to meet with his formidable peer.  Later that month, alas, Malcolm was gunned down in New York, almost certainly the work of the NOI, although details of the assassination remain murky to this day.

In the three years remaining to him after Malcolm’s assassination, King borrowed liberally from the black nationalist’s playbook, embracing in particular the notion of economic justice as a necessary component of full equality for African Americans.  Although he never wavered in his commitment to non-violence, King saw his cause differently after the uprising in the Watts section of Los Angeles in the summer of 1965.  Watts “transformed King,” Joseph writes, making clear that civil unrest in Northern cities was a “product of institutional racism and poverty that required far more material and political resources than ever imagined by the architects of the Great Society” (p.235).  King also began to speak out publicly in 1965 against the escalation of America’s military commitment in Vietnam, marking the beginning of the end of his close relationship with President Johnson.

King delivered his most pointed criticism of the war on April 4, 1967, precisely one year prior to his assassination, at the Riverside Church in New York City, abutting Harlem, Malcolm’s home base.  Linking the war to the prevalence of racism and poverty in the United States, King lamented the “cruel irony of watching Negro and white boys on TV screens as they kill and die together for a nation that has been unable to seat them together in the same schools.” (p.267).  Joseph terms King’s Riverside Church address the “boldest political decision of his career” (p.268).  It was the final turning point for King, marking his formal break with mainstream politics and his “full transition” from a civil rights leader to a “political revolutionary” who “refused to remain quiet in the face of domestic and international crises” (p.268).

After Riverside, in his last year, King became what Joseph describes as America’s “most well-known anti-war activist” (p.271).  King lent a Nobel Prize-winner’s prestige to a peace movement struggling to find its voice at a time when most Americans still supported the war.  Simultaneously, he pushed for federally guaranteed income, decent and racially integrated housing and public schools — what he termed a “revolution of values” (p.287).  During this period, Stokely Carmichael, who once worked with King in Mississippi (and is the subject of a Joseph biography), coined the term “Black Power.”  In Joseph’s view, the Black Power movement represented the natural extension of Malcolm’s political philosophy, post-Malcolm. Although King frequently criticized the movement in his final years, he nonetheless found himself in agreement with much of its agenda.

In his final months, King supported a Poor People’s march on Washington, D.C.  He was in Memphis, Tennessee in April 1968 on behalf of striking sanitation workers, overwhelmingly African-American, who held jobs but were seeking better salaries and more humane working conditions, when he too was felled by an assassin’s bullet.

* * *

After reading Joseph’s masterful synthesis, it is easy to imagine Malcolm supporting King’s efforts in Memphis that April.  And if the two men were still with us today, it is equally easy to imagine both embracing warmly the “Black Lives Matter” movement.


Thomas H. Peebles

La Châtaigneraie, France

April 20, 2021





Filed under American Politics, American Society, Political Theory, United States History

Digging Deeply Into The Idea of Democracy


James Miller, Can Democracy Work?

A Short History of a Radical Idea, From Ancient Athens to Our World

(Farrar, Straus and Giroux)


William Davies, Nervous States:

Democracy and the Decline of Reason

(WW Norton & Co.)

[NOTE: A condensed version of this review has also been posted to a blog known as Tocqueville 21: https:/  Taking its name from the 19th century French aristocrat who gave Americans much insight into their democracy, Tocqueville 21 seeks to encourage in-depth thinking about democratic theory and practice, with particular but by no means exclusive emphasis on the United States and France.  The site is maintained in connection with the American University of Paris’ Tocqueville Review and its Center for Critical Democracy Studies.  I anticipate regular postings on Tocqueville 21 going forward.]

Did American democracy survive the presidency of Donald Trump?  Variants on this question, never far from the surface during that four-year presidency, took on terrifying immediacy in the wake of the assault on the US Capitol this past January. The question seems sure to occupy historians, commentators and the public during the administration of Joe Biden and beyond.  If nothing else, the Trump presidency and now its aftermath bring home the need to dig deeply into the very idea of democracy, looking more closely at its history, theory, practice, and limitations, asking what are its core principles and what it takes to sustain them.  But we might shorten the inquiry to a single, pragmatic question: can democracy work?

This happens to be the title of James Miller’s Can Democracy Work? A Short History of a Radical Idea, From Ancient Athens to Our World.  But it could also be the title of William Davies’ Nervous States: Democracy and the Decline of Reason.  The two works, both written during the Trump presidency, fall short of providing definitive or even reassuring answers to the question that Miller, professor of politics and liberal studies at New York’s New School for Social Research, has taken for his title.  But each casts enriching yet altogether different light on democratic theory and practice.

Miller’s approach is for the most part historical. Through a series of selected – and by his own admission “Eurocentric” (M.12) — case studies, he explores how the term “democracy” has evolved over the centuries, beginning with ancient Athens.  The approach of Davies, a political economist at Goldsmiths, University of London, is more difficult to categorize, but might be described as philosophical.  It is grounded in the legacy of 17th century philosophers René Descartes and Thomas Hobbes, his departure point for a complex and not always easy to follow explanation of the roots of modern populism, that combustible mixture of nostalgia, resentment, anger and fear that seemed to have triumphed at the time of the 2016 Brexit vote in Great Britain and the election of Donald Trump in the United States later that year.  Davies is most concerned about two manifestations of the “decline of reason,” his subtitle: the present day lack of confidence and trust in experts and democratically elected representatives; and the role of emotion and fear in contemporary politics.

Miller frames his historical overview with a paradox: despite blatant anti-democratic tendencies across the globe, a generalized notion of democracy as the most desirable form of government retains a strong hold on much, maybe most, of the world’s population.  From Myanmar and Hong Kong to the throng that invaded the US Capitol in January, nearly every public demonstration against the status quo utilizes the language of democracy.  Almost all the world’s political regimes, from the United States to North Korea, claim to embody some form of democracy.  “As imperfect as all the world’s systems are that claim to be democratic,” Miller writes, in today’s world the ideal of democracy is “more universally honored than ever before in human history” (M.211).

But the near-universal adherence to this ideal is relatively recent, dating largely from the period since World War II, when the concept of democracy came to embrace self-determination of populations that previously had lived under foreign domination.  Throughout most of history, democracy was associated with the danger of mob rule, often seen as a “virtual synonym for violent anarchy” (M.59).  Modern democracy in Miller’s interpretation begins with the 18th century French and American Revolutions.  Revolts against the status quo are the heart of modern democracy, he contends.  They are not simply blemishes on the “peaceful forward march toward a more just society” (M.10).  Since the early 19th century, representative government, where voters elect their leaders — “indirect democracy” — has come to be considered the only practical form of democratic governance for populous nation-states.

* * *

But in 5th and 4th century BCE Athens, where Miller’s case studies begin, what we now term direct democracy prevailed.  More than any modern democracy, a community of near absolute equality existed among Athenian citizens, even though citizenship was tightly restricted, open only to a fraction of the adult male population.  Many of Athens’ rivals, governed by oligarchs and aristocrats, considered the direct democracy practiced in Athens a formula for mob rule, a view that persisted throughout the intervening centuries.  By the late 18th century, however, a competing view had emerged in France that some sort of democratic rule could serve as a check on monarchy and aristocracy.

In revolutionary Paris in early 1793, in the midst of the bloodiest phase of the French Revolution, the Marquis de Condorcet led the drafting of a proposed constitution that Miller considers the most purely democratic instrument of the 18th century and maybe of the two centuries since.  Condorcet’s draft constitution envisioned a wide network of local assemblies in which any citizen could propose legislation.  Although not implemented, the thinking behind Condorcet’s draft gave impetus to the notion of representative government as a system “preferable to, and a necessary check on, the unruly excesses of a purely direct democracy” (M.86).

The debate in the early 19th century centered on suffrage, the question of who gets to vote, with democracy proponents pushing to remove or lessen property requirements for extending the franchise to ever-wider segments of the (male) adult population.  A cluster of additional institutions and practices came to be considered essential to buttress an extended franchise, among them free and fair elections, protection of the human rights of all citizens, and adherence to the rule of law.  But Miller’s 19th century case studies are instances of short-term setbacks for the democratic cause: the failure of the massive popular movement known as Chartism to extend the franchise significantly in Britain in the 1840s; and the 1848 uprisings across the European continent, at once nationalist and democratic, which sought representative political institutions and something akin to universal male suffrage, but failed everywhere but in France to extend the franchise.

In the second half of the 19th century, moreover, proponents of democracy found themselves confronting issues of economic freedom and social justice in a rapidly industrializing Europe.  Karl Marx, for one, whose Communist Manifesto was published in 1848, doubted whether democracy – “bourgeois democracy,” he termed it – could alleviate widespread urban poverty and the exploitation of workers.  But the most spectacular failure among Miller’s case studies was the Paris Commune of 1871, which collapsed into disastrous violence amidst tensions between economic and political freedom.  Ironically, the fear of violence that the Commune unleashed led to a series of democratizing political reforms throughout Europe, with the right to vote extended to more male citizens.  The organization of workers into unions and the rise of political parties complemented extension of the franchise and contributed to the process of democratization in late 19th and early 20th century Europe.

In the United States, a case apart in Miller’s case studies, a genuinely democratic culture had taken hold by the 1830s, as the young French aristocrat Alexis de Tocqueville recognized during his famous 1831-32 tour, ostensibly to study prison conditions.  As early as the 1790s, there was a tendency to use the terms “republic” and “democracy” as synonyms for the American constitutional system, even though none of the drafters of the 1787 Constitution thought of himself as a democrat.  James Madison derided what he termed pure democracies, “which have ever been spectacles of turbulence and contention” (M.99).  The constitution’s drafters envisioned a representative government in which voters would select a “natural aristocracy,” as John Adams put it, comprising “men of virtue and talent, who would govern on behalf of all, with a dispassionate regard for the common good” (M.92).

The notion of a natural aristocracy all but disappeared when Andrew Jackson split Thomas Jefferson’s Democratic-Republican Party in two in his successful run for the presidency in 1828.  Running as a “Democrat,” Jackson confirmed that “democracy” from that point forward would be an “unambiguously honorific term in the American political lexicon” (M.110), Miller writes.  It was during Jackson’s presidency that Tocqueville arrived in the United States.

Aware of how the institution of slavery undermined America’s democratic pretensions, Tocqueville nonetheless saw in the restlessness of Jacksonian America what Miller describes as a “new kind of society, in which the principle of equality was pushed to its limits” (M.115).  As practiced in America, democracy was a “way of life, and a shared faith, instantiated in other forms of association, in modes of thought and belief, in the attitudes and inclinations of individuals who have absorbed a kind of democratic temperament” (M.7).  Yet Tocqueville seemed to have had the Jacksonian style of democracy in mind when he warned against what he called “democratic despotism,” where a majority could override the rights and liberties of minorities.

Woodrow Wilson’s plea in 1917 to the US Congress that the United States enter World War I to “make the world safe for democracy” constitutes the beginning of the 20th century idea of democracy as a universal value, Miller argues.  But Wilson’s soaring faith in democracy turned out to be “astonishingly parochial” (M.176).  The post-World War I peace conferences in 1919 left intact the colonies of Britain and France, “under the pretext that the nonwhite races needed more time to become fully mature peoples, fit for democratic institutions” (M.190-91).

The Covenant of the League of Nations, the organization that Wilson hoped would be instrumental in preventing future conflict, “encouraged an expectation of self-determination as a new and universal political right” (M.191), even as the isolationist Congress thwarted Wilson’s plan for United States membership in the League.  For countries living under colonial domination, the expectation of self-determination was heightened after the more murderous World War II, particularly through the 1948 United Nations’ Universal Declaration of Human Rights.  Although a text without enforcement mechanisms, the declaration helped inspire human rights and independence movements across the globe.

Miller finishes by explaining why he remains attracted to modern attempts at direct democracy, resembling in some senses those of ancient Athens, particularly the notion of “participatory democracy” which influenced him as a young 1960s radical and which he saw replicated in the Occupy Wall Street Movement of ten years ago.  But direct democracy, he winds up concluding, is no more viable today than it was at the time of the French Revolution. It is not possible to create a workable participatory democracy model in a large, complex society.  Any “serious effort to implement such a structure will require a delegation of authority and the selection of representatives – in short the creation of an indirect democracy, and at some distance from most participants”  (M.232-33).

The Trump presidency, Miller argues, is best considered “not as a protest against modern democracy per se, but against the limits of modern democracy” (M.239).  Like Brexit, it expressed, in an “inchoate and potentially self-defeating” manner, a desire for “more democracy, for a larger voice for ordinary people” (M.240) – not unlike the participatory democracy campaigns of the 1960s.  At the time of Trump’s January 2017 inauguration, Miller appreciated that he remained free to “protest a political leader whose character and public policies I found repugnant.”  But he realized that he was “also expected to acknowledge, and peacefully coexist with, compatriots who preferred Trump’s policies and personal style.  This is a part of what it means to be a citizen in a liberal democracy” (M.240)  —  a portentous observation in light of the January 2021 assault on the US Capitol.

Democracies, Miller concludes, need to “explore new ways to foster a tolerant ethos that accepts, and can acknowledge, that there are many incompatible forms of life and forms of politics, not always directly democratic or participatory, in which humans can flourish” (M.234).  Although he doesn’t say so explicitly, this sounds much like an acknowledgement that present day populism is here to stay.  By an altogether different route, Davies reaches roughly the same conclusion.

* * *

Davies is far from the first to highlight the challenges to democracy when voters appear to abandon reason for emotion; nor the first to try to explain why the claims of government experts and elected representatives are met with increased suspicion and diminished trust today.  But he may be the first to tie these manifestations of the “decline of reason” to the disintegration of binary philosophical distinctions that Descartes and Hobbes established in the 17th century — Descartes between mind and body, Hobbes between war and peace.

For Descartes, the mind existed independently of the body.  Descartes was obsessed by the question whether what we see, hear, or smell is actually real.  He “treated physical sensations with great suspicion, in contrast to the rational principles belonging to the mind” (D.xiii).  Descartes gave shape to the modern philosophical definition of a rational scientific mind, Davies argues, but to do so, he had to discount sensations and feelings.  Hobbes, exhausted by the protracted religious Thirty Years War on the European continent and civil wars in England, argued that the central purpose of the state was to “eradicate feelings of mutual fear that would otherwise trigger violence” (D.xiii).  If people don’t feel safe, Hobbes seemed to contend, it “doesn’t matter whether they are objectively safe or not; they will eventually start to take matters into their own hands” (D.xvi).

Davies shows how Descartes and Hobbes helped create the conceptual foundation for the modern administrative state, fashioned by merchants who introduced “strict new rules for how their impressions should be recorded and spoke of, to avoid exaggeration and distortion, using numbers and public record-keeping” (D.xiii), not least for more efficient tax collection.  Using numbers in this pragmatic way, these 17th century merchants were the forerunners of what we today call experts, especially in the disciplines of statistics and economics, with an ability to “keep personal feelings separate from their observations” (D.xiii).

The conclusions of such experts, denominated and accepted as “facts,” established the value of objectivity in public life, providing a basis for consensus among people who otherwise have little in common.  Facts provided by economists, statisticians, and scientists thus have what for Hobbes was a peace-building function; they are “akin to contracts, types of promises that experts make to each other and the public, that records are accurate and free from any personal bias or political agenda” (D.124), Davies explains.  But if democracy is to provide effective mechanisms for the resolution of disputes and disagreements, there must be “some commonly agreed starting point, that all are willing to recognize,” he warns. “Some things must be outside politics, if peaceful political disputes are to be possible” (D.62).

Davies makes the bold argument that the rise of emotion in contemporary politics and the inability of experts and facts to settle disputes today are the consequences of the breakdown of the binary distinctions of Descartes and Hobbes.  Through rapid advances in neuroscience, the brain, rather than Descartes’ concept of mind, has become the main way we understand ourselves, demonstrating the “importance of emotion and physiology to all decision making” (D.xii).  The distinction between war and peace has also become less clear-cut since Hobbes’ time.

Davies is concerned particularly with how the type of knowledge used in warfare has been co-opted for political purposes.  Warfare knowledge doesn’t have the luxury of “slow, reasonable open public debate of the sort that scientific progress has been built upon.”  It is “shrouded in secrecy, accompanied by deliberate attempts to deceive the enemy. It has to be delivered at the right place and right time” (D.124), with emotions playing a crucial role.  Military knowledge is thus weaponized knowledge.  Political propaganda has all the indicia of military knowledge at work for political advantage.  But so does much of today’s digital communication.  Political argument conducted online “has come to feel more like conflict” (D.193), Davies observes, with conspiracy theories in particular given wide room to flourish.

The upshot is that democracies are being transformed today by the power of feeling and emotion, in “ways that cannot be ignored or reversed” (D.xvii-xviii).  Objective claims about the economy, society, the human body and nature “can no longer be successfully insulated from emotions” (D.xiv).  While we can lament the decline of modern reason, “as if emotions have overwhelmed the citadel of truth like barbarians” (D.xv), Davies suggests that we would do better to “value democracy’s capacity to give voice to fear, pain and anxiety that might otherwise be diverted in far more destructive directions” (D.xvii).

Yet Davies leaves unanswered the question whether there are limits on the forms of fear, pain and anxiety to which democracy should give voice.  He recognizes the potency of nationalism as a “way of understanding the life of society in mythical terms” (D.87).  But should democracy strive to give voice to nationalism’s most xenophobic and exclusionary forms?  Nowhere does he address racism, which, most social scientists now agree, was a stronger contributing factor to the 2016 election of Donald Trump than economic disparity, and it is difficult to articulate any rationale for giving racism a voice in a modern democracy.

In countering climate change skepticism, a primary example of popular mistrust of expert opinion and scientific consensus, Davies rejects renewed commitment to scientific expertise and rational argument – “bravado rationalism,” he calls it – as insufficient to overcome the “liars and manipulators” (D.108) who cast doubt on the reality of climate change.  But he doesn’t spell out what would be sufficient.  The book went to press prior to the outbreak of the Coronavirus pandemic.  Were Davies writing today, he likely would have addressed similar resistance to expert claims about fighting the pandemic, such as the efficacy of wearing masks.

Writing today, moreover, Davies might have chosen an expression other than “barbarians storming the citadel of truth,” which now brings to mind last January’s assault on the US Capitol.  While those who took part in the assault itself can be dealt with through the criminal justice process, with all the due process protections that a democracy affords accused lawbreakers, an astounding number of Americans who did not participate remain convinced, despite overwhelming empirical evidence to the contrary, that Joe Biden and the Democrats “stole” the 2020 presidential election from Donald Trump.

* * *

How can a democracy work when there is widespread disagreement with an incontrovertible fact, especially one that goes to democracy’s very heart, in this case the result of the vote and the peaceful transfer of power after an orderly election?  What if a massive number of citizens refuse to accept the obligation that Miller felt when his candidate lost in 2016, to acknowledge and peacefully coexist with the winning side?  Davies’ trenchant but quirky analysis provides no obvious solution to this quandary.  If we can find one, it will constitute an important step in answering the broader question whether American democracy survived the Trump presidency.


Thomas H. Peebles

La Châtaigneraie, France

March 17, 2021



Filed under American Politics, History, Intellectual History, Political Theory, United States History

What Did Chuckie Know?


Jack Goldsmith, In Hoffa’s Shadow:

A Stepfather, a Disappearance in Detroit, and My Search for the Truth

(Farrar, Straus and Giroux, 2019)

Until his conviction on jury tampering charges and subsequent imprisonment in 1967, James R. Hoffa had been head of one of America’s most powerful unions, the International Brotherhood of Teamsters, the union of America’s truck drivers.  On July 30, 1975, Hoffa got into a car in a parking lot of a suburban Detroit restaurant and was never seen again.  To this day, and after more than four decades of investigation by top American law enforcement agencies, led by the FBI, we still do not know precisely what happened to Hoffa or who was responsible for his disappearance.  His body has never been recovered.  But for the better part of three decades, a prime suspect as an accomplice – someone likely to know most of the details behind the disappearance – was one Charles Lenton O’Brien, usually referred to as “Chuckie.”

Chuckie was suspected of having been the driver of the car that picked up Hoffa in the Detroit parking lot and drove him to his death.  Prior to the union leader’s jail sentence, Chuckie had been Hoffa’s assistant and had been exceptionally close to Hoffa personally since age seven, to the point that he considered Hoffa to be his stepfather.  Although his responsibilities to Hoffa varied — he was skilled at hard-knuckle “negotiations” with strikebreakers, for example — Chuckie was often perceived as the union leader’s chauffeur, since he drove Hoffa on much of his official business.  In the FBI’s view, Chuckie was the only person with whom Hoffa would have voluntarily gotten into a car.

Moreover, Chuckie had been in the same parking lot on the day of the disappearance.  He had delivered a load of fresh salmon that afternoon from the Teamsters’ headquarters in downtown Detroit to the home of a Teamsters official who lived not far from the restaurant.  The car he used to deliver the salmon belonged to Joey Giacalone, the son of Detroit Mafia boss Anthony (“Tony”) Giacalone, the man Hoffa thought he was to meet with on that July afternoon and a leading suspect in the case, along with East Coast underworld figure Anthony Provenzano.  The FBI later detected Hoffa’s hair and scent in the car.  On the afternoon of the 30th, Chuckie was not seen by anyone else in the crucial period that encompassed the disappearance.  Further, he was known to have had a falling out with his former boss the previous year.

Despite this powerful circumstantial evidence, Chuckie was never indicted and adamantly maintained his innocence, up to his own death last winter at age 86.  Months before he died, Chuckie received high-powered support for his claim of innocence in the form of Harvard Law School professor Jack Goldsmith’s In Hoffa’s Shadow: A Stepfather, a Disappearance in Detroit, and My Search for the Truth, which painstakingly seeks to explain how and why the FBI, the US Department of Justice and American law enforcement got the Hoffa case so wrong for so long by focusing on Chuckie when they should have set their sights elsewhere.  Goldsmith would seem to be a stellar candidate to argue on behalf of Chuckie.

Prior to moving to Harvard in 2004, Goldsmith served as head of the Department of Justice’s prestigious and powerful Office of Legal Counsel, an office that arbitrates legal issues involving power and authority within the executive branch of the United States government.  Goldsmith’s predecessors in that position include former Supreme Court justices William Rehnquist and Antonin Scalia.  Goldsmith came to OLC in 2003 in the immediate aftermath of the United States’ invasion of Iraq earlier that year.  In a short but controversial tenure at OLC of about nine months, Goldsmith dealt with some of the most contentious legal issues generated by the United States’ post-9/11 “war on terror.”  He and then Deputy Attorney General James Comey famously confronted personnel from the White House in March 2004 at the hospital bed of a seriously ill Attorney General John Ashcroft and talked Ashcroft out of renewing a secret government surveillance program that Goldsmith and Comey had concluded was inconsistent with governing law.

But Goldsmith is more than just a high-powered legal beagle and Chuckie’s de facto lawyer.  He is also Chuckie’s stepson.  The word “stepfather” in Goldsmith’s title could apply loosely to Chuckie’s relationship to Hoffa, but applies in the strict legal sense to Goldsmith’s relationship to Chuckie, who stepped into Goldsmith’s life “seemingly from nowhere” (p.15) when he married Goldsmith’s mother Brenda, after Brenda’s first two marriages failed.  Brenda’s first husband, Goldsmith’s father, left the family when Goldsmith was a young boy, just as Chuckie’s father had left him and his mother at about the same age.  Brenda then married a doctor, but that marriage did not work out either.  Chuckie and Brenda married in Memphis in June 1975, one month prior to Hoffa’s disappearance.  Brenda had three young sons at the time – author Goldsmith was twelve years old, and his brothers Brett and Steven were 9 and 7 respectively.

While Goldsmith’s core purpose is to make the case for Chuckie’s innocence in the Hoffa disappearance, his book is in no small part also a heartfelt personal memoir about his relationship to Chuckie, a loving stepfather yet a man Goldsmith describes as a “hapless blabbermouth with famously terrible judgment” and an “uneducated serial lawbreaker” (p.9) who served two prison terms.  The largest segment of the book details Chuckie’s own life story, intertwined with that of Hoffa and the “complex legacy” that Hoffa “bequeathed to the American labor movement and American justice” (p.9).  But the juiciest segment comes at the end, as Goldsmith probes his stepfather to reveal more of what he knew about Hoffa’s disappearance – the “My Search for the Truth” portion of the book’s subtitle.  Goldsmith skillfully weaves these disparate strands into an absorbing, stranger-than-fiction narrative; you can’t make this stuff up.

* * *

Chuckie was seven when his father abruptly left him and his mother, Sylvia Pagano.  Sylvia, whose husband and father both had links to organized crime in Kansas City, moved in the early 1940s from Kansas City to Detroit with her young son. There she met Hoffa, an energetic and fast rising organizational dynamo with Detroit’s Teamster Local 299.  They became personal friends, although Goldsmith rejects the notion that the relationship was ever anything more.  As Hoffa rose within the Teamsters, he took Sylvia’s son under his wing, serving as a surrogate father to young Chuckie.  From the time Hoffa met Chuckie as a boy and continuing into his adult years, he showed Chuckie “solicitude, patience, and affection that he showed no one else in his life except for his daughter, Barbara” (p.82).  Chuckie in turn “loved Jimmy Hoffa more than anyone and would do anything he asked” (p.83).

Sylvia, who is often credited with convincing Hoffa he could do business with organized crime, was also a friend of Anthony Giacalone.  Like Hoffa, Giacalone often stepped in to help Sylvia with her parenting responsibilities by taking young Chuckie under his wing, to the point that Chuckie referred to Giacalone as “Uncle Tony.”  If Hoffa was Chuckie’s stepfather, Chuckie later told his own stepson, Uncle Tony was his godfather.  From Hoffa and Giacalone, the two most significant male figures in his life as a boy and young man, Chuckie absorbed the value of what the Sicilians call Omertà.

Omertà in the organized crime world is a code of silence, an ability to recognize and keep quiet about those matters that, in the Mafia euphemism, “shouldn’t be talked about.”  Chuckie learned early in life to avoid being a “rat,” someone who did not respect the code of silence.  But Omertà is little more than an extreme form of the loyalty that was the “core commitment” (p.249) in his stepfather’s life as he grew from a neglected little boy into a young man.  Chuckie, Goldsmith writes, “yearned for affection and sought it by loving those he cared for with intense fidelity and by doing his all to please them” — the same qualities that made Chuckie “such a great father to me decades later” (p.83).

Goldsmith recalls being much happier, even joyful, after Chuckie’s arrival in his boyhood home, having never previously experienced fatherly attention.  His new stepfather “smothered me in love that he never received from his father, and taught me right from wrong even though he had trouble distinguishing the two in his own life” (p.5).  Chuckie was involved in Goldsmith’s youth sports and almost all his other activities, except homework – that was not his thing.  In 1976, Chuckie formally adopted Brenda’s boys, who changed their last name to “O’Brien.”  Chuckie had joined the Goldsmith family about nine months prior to Hoffa’s disappearance, when Goldsmith was twelve years old.  During Goldsmith’s adolescence, his stepfather was fighting for his exoneration – and in some senses for his life – as a prime suspect in his former boss’ disappearance.

While Chuckie was trying to establish his innocence, Goldsmith went off to college and began to see his stepfather differently.  As a first-year student at Yale Law School, Goldsmith decided to change his legal name from O’Brien back to Goldsmith.  It was, Goldsmith explains, part of an effort to “cut Chuckie out of my life completely.”  Chuckie had done “nothing affirmatively to hurt me, and indeed had only ever shown me love.  But ambition augmented by feelings of moral superiority blinded me to my true motives or to the effect of my action on him or my family” (p.31-32).  During his time at Yale, the height of the Reagan era, Goldsmith became more conservative politically.  He took the side of business in labor disputes, and sided generally with the prosecution rather than criminal defendants.  At that point, Goldsmith by his own description was “entirely self-absorbed . . . focused on my [career] prospects, my girlfriend, and not much else” (p.30).

As his legal career took off over the course of the next two decades, Goldsmith had little contact with his stepfather.  It was not until late 2004, after Goldsmith himself had married, had a family of his own, and had left the Department of Justice for Harvard that he sought Chuckie’s forgiveness for the long estrangement.  Chuckie “accepted me back into his life without qualification, rancor, or drama,” Goldsmith writes.  He “acted as if those twenty years hadn’t happened” (p.41).

* * *

Chuckie began work with the Teamsters upon graduation from high school, with ambitions to become a union organizer like his putative stepfather.  But he was continually passed over for the position he was most interested in, the leadership of Detroit’s Teamster Local 299, the local Hoffa had headed before he became the Teamsters’ national president in 1957.  Although Chuckie studied Hoffa’s methods closely, he “didn’t grasp the finer points of labor organizing or union finances, he wasn’t a charismatic speaker, he often didn’t follow through on commitments, and he lacked good judgment” (p.80), Goldsmith writes.  He had a “knucklehead charm and undoubted goodwill, and most people liked him despite his shortcomings.  But when he tried to mimic Jimmy Hoffa, Chuckie often fell on his face” (p.80).  Hoffa was a “deadly serious man who suffered no fools and in labor matters surrounded himself with learned professionals” (p.82).  As much as he loved Chuckie, Hoffa was not prepared to allow him to rise in the union beyond his competence level.

Hoffa combined the “business sense of an industrial tycoon with the political instinct of a big city boss, and the showmanship of a vaudeville entertainer” (p.80), according to one account.  He “identified with struggling workers and possessed an angry intensity about righting power imbalances in the workplace” (p.51).  Hoffa earned the admiration of the union rank and file, some 400,000 truckers by the early 1960s, by securing better hours and equipment for union members and winning impressive health, pension, and vacation benefits —  in essence winning a place for his members in the middle class, giving them what Chuckie described as a “dignity they never imagined possible” (p.75).

But Hoffa found himself in frequent conflict with local and regional leaders.  In part to secure needed support from several East Coast locals that were already controlled by organized crime, Hoffa lent liberally from union pension funds to organized crime figures.  Hoffa was “remarkably candid in defense of these arrangements,” Goldsmith learned from Chuckie.  He “always claimed he was simply adapting his labor goals to the power reality on the ground” (p.88).  Among Chuckie’s multiple duties as Hoffa’s assistant, he often served as a conduit between his boss and Mafia figures — the “union side” and the “Sicilian side” (p.140), as Chuckie put it.

Hoffa’s colossal downfall began in Washington in 1957, when Robert F. Kennedy, then the ambitious chief counsel to what was known as the McClellan Committee of the United States Senate, zealously sought to expose links between organized labor and organized crime in contentious and highly publicized hearings.  When Kennedy became Attorney General in his brother John’s administration in 1961, his Department of Justice targeted Hoffa for prosecution, a continuation of what Goldsmith terms a personal vendetta “probably without parallel in American history” (p.98).

Kennedy’s single-minded focus on Hoffa constitutes the “paradigmatic case in American history of wielding prosecutorial power to destroy a person rather than pursue a crime” (p.120), as Goldsmith puts it.  Kennedy’s seven-year assault on Hoffa was “more responsible than has been appreciated for the steady decline in union power ever since” (p.9).  The identification of the American labor movement with corruption, violence, and bossism “crystallized with Bobby Kennedy’s singular crusade” and has “never receded, even though the idea was exaggerated at the time and is largely inaccurate today” (p.108).

Hoffa was prosecuted for jury tampering and went to jail in March 1967, after Kennedy had left the Justice Department.  With Hoffa imprisoned, Frank Fitzsimmons stepped into the Teamsters leadership position and the wall that Hoffa had tried to maintain between organized crime and the union and its pension funds collapsed entirely.  The “sharpest irony” of Kennedy’s crusade against Hoffa, Goldsmith concludes, was that it “opened the door for the mob to infiltrate and leech off the union like never before” (p.119-120).

President Richard Nixon, although viscerally anti-union, cynically sought Teamster votes for his 1972 re-election campaign by granting Hoffa a conditional commutation in 1971.  Nixon’s grant of clemency released Hoffa from prison but required him to refrain from engaging in union activities, a condition Hoffa agreed to but in Chuckie’s view never intended to honor.  Hoffa’s efforts after his release to regain control of the union he once led included much erratic behavior, along with threats to expose the comfortable relationship between the union and criminal syndicates, threats that likely led to his disappearance in July 1975.  Did Chuckie have a role in that disappearance?

* * *

With the FBI under enormous pressure to make progress in the Hoffa case in the second half of 1975, its early belief in Chuckie’s role in driving Hoffa to his death in July 1975 became, as Goldsmith puts it, “one of the few unquestioned certainties in the case” (p.234).  But, he argues, the FBI “focused on facts that fit its theory and ignored or discounted the many countervailing facts and circumstances that did not fit its theory but should have made it much less confident that Chuckie was involved” (p.234).  Goldsmith’s elaboration of these “countervailing facts and circumstances” reads like a transcript from a defense attorney’s closing argument to a jury.

Goldsmith stresses the unlikelihood of using the car of Anthony Giacalone’s son: of all the cars available in the Motor City, “why use the car of the son of a leading Detroit mobster and the man who supposedly arranged for the hit or likely knew it was coming” (p.234).  He argues the logistical implausibility of Chuckie driving Hoffa to his death in the time gap when he was unseen on the afternoon of the disappearance.  He emphasizes that in the aftermath of the disappearance, Chuckie alone among the suspects spoke willingly and openly to the FBI, without any apparent repercussions from the perpetrators.  And he highlights numerous examples of post-disappearance conduct inconsistent with guilt.  As to the notion that Chuckie was the only person with whom Hoffa would have gotten into a car voluntarily, Goldsmith notes that the evidence does not show that Hoffa voluntarily got into the car.  The problems with the FBI’s theory of the case, Goldsmith acknowledges, “do not by themselves exonerate Chuckie.  But they stand as mysterious and unexplained counterpoints to the circumstantial evidence against him” (p.237).

By the time Goldsmith became involved in Chuckie’s case, sometime around 2012, these problems had convinced a handful of FBI agents in its Detroit office of Chuckie’s innocence.  At one point, after Chuckie had successfully passed an FBI-administered polygraph test, Goldsmith thought he had brokered an official FBI letter exonerating his stepfather.  Unfortunately for the hapless Chuckie, the idea of a letter was nixed by political appointees at the FBI and the Department of Justice who, in Goldsmith’s view, “didn’t want to take the political heat from admitting the government’s errors during the last four decades” (p.285).  Goldsmith’s book is as close as Chuckie came in his lifetime to an official exoneration.

* * *

Chuckie was not initially enamored of the idea of Goldsmith writing a book about him, but thought his hotshot Harvard Law School stepson represented his best opportunity for the exoneration he so desperately sought.  The stepson pledged that he would do his best, but only if his stepfather told him the truth.  Each was aware of the other’s mixed motives.  Chuckie was “committed to Omertà, I was committed to its opposite,” Goldsmith writes, “but we were both committed to each other.”  Out of affection, both tried hard to “help or accommodate the other . . . I was always on guard for mendacity or deflection.  He was always on guard for forbidden topics, and was brilliant, when he wanted to be, at resisting my probes” (p.301).

From the beginning, Goldsmith hoped that his efforts to clear Chuckie would also solve the Hoffa puzzle.  Goldsmith suspected that Chuckie knew a whole lot more about Hoffa’s disappearance than he was telling him, maybe the full truth.  As Goldsmith tells the story, he came at least closer to the truth.  But Chuckie adhered to the code of Omertà throughout their discussions.  At one point, at a time when Chuckie was in poor health and may well have been aware he didn’t have long to live, Goldsmith told his stepfather that he couldn’t believe he would take to the grave his knowledge of one of the most spectacular unsolved crimes of the 20th century.  Chuckie’s response: “Believe it, Jack” (p.305).


Thomas H. Peebles

Paris, France

January 10, 2021




Filed under History, United States History