Category Archives: American Society

Taking Exception To American Foreign Policy

Andrew Bacevich, After the Apocalypse:

America’s Role in a World Transformed (Metropolitan Books 2020)

Andrew Bacevich is one of America’s most relentless and astute critics of United States foreign policy and the role the American military plays in the contemporary world.  Professor Emeritus of History and International Relations at Boston University and presently president of the Quincy Institute for Responsible Statecraft, Bacevich is a graduate of the United States Military Academy who served in the United States Army for over 20 years, including a year in Vietnam.  In his most recent book, After the Apocalypse: America’s Role in a World Transformed, which came out toward the end of 2020, Bacevich makes an impassioned plea for a smaller American military, a demilitarized and more humble US foreign policy, and more realistic assessments of US security and genuine threats to that security, along with greater attention to pressing domestic needs.  Linking these strands is Bacevich’s scathing critique of American exceptionalism, the idea that the United States has a special role to play in maintaining world order and promoting American democratic values beyond its shores.

In February 2022, as I was reading, then writing and thinking about After the Apocalypse, Vladimir Putin continued amassing soldiers on the Ukraine border and threatening war before invading the country on the 24th.  Throughout the month, I found my views of Bacevich’s latest book taking form through the prism of events in Ukraine.  Some of the book’s key points — particularly on NATO, the role of the United States in European defense, and yes, Ukraine — seemed out of sync with my understanding of the facts on the ground and in need of updating.  “Timely” did not appear to be the best adjective to apply to After the Apocalypse.

Bacevich is a difficult thinker to pigeonhole.  While he sometimes describes himself as a conservative, in After the Apocalypse he speaks the language of those segments of the political left that border on isolationist and recoil at almost all uses of American military force (these are two distinct segments: I find myself dependably in the latter camp but have little affinity with the former).  But Bacevich’s against-the-grain perspective is one that needs to be heard and considered carefully, especially when war’s drumbeat sounds.

* * *

Bacevich’s recommendations in After the Apocalypse for a decidedly smaller footprint for the United States in its relations with the world include a gradual US withdrawal from NATO, which he considers a Cold War relic, an “exercise in nostalgia, an excuse for pretending that the past is still present” (p.50).  Defending Europe is now “best left to Europeans” (p.50), he argues.  In any reasoned reevaluation of United States foreign policy priorities, moreover, Canada and Mexico should take precedence over European defense.  Threats to Canadian territorial sovereignty as the Arctic melts “matter more to the United States than any danger Russia may pose to Ukraine” (p.169).

I pondered that sentence throughout February 2022, wondering whether Bacevich was at that moment as unequivocal about the United States’ lack of any geopolitical interest in Ukraine as he had been when he wrote After the Apocalypse.  Did he still maintain that the Ukraine-Russia conflict should be left to the Europeans to address?  Was it still his view that the United States has no business defending beleaguered and threatened democracies far from its shores?  The answer to both questions appears to be yes.  Bacevich has had much to say about the conflict since mid-February of this year, but I have been unable to ascertain any movement or modification on these and related points.

In an article appearing in the February 16, 2022, edition of The Nation, thus prior to the invasion, Bacevich described the Ukrainian crisis as posing “minimal risk to the West,” given that Ukraine “possesses ample strength to defend itself against Russian aggression.”  Rather than flexing its muscles in faraway places, the United States should be “modeling liberty, democracy, and humane values here at home. The clear imperative of the moment is to get our own house in order” and avoid “[s]tumbling into yet another needless war.”   In a nutshell, this is After the Apocalypse’s broad vision for American foreign policy. 

Almost immediately after the Russian invasion, Bacevich wrote an op-ed for the Boston Globe characterizing the invasion as a “crime” deserving of “widespread condemnation,” but cautioning against a “rush to judgment.”  He argued that the United States had no vital interests in Ukraine, as evidenced by President Biden’s refusal to commit American military forces to the conflict.  But he argued more forcefully that the United States lacked clean hands to condemn the invasion, given its own war of choice in Iraq in 2003 in defiance of international opinion and the “rules-based international order” (Bacevich’s quotation marks).  “[C]oercive regime change undertaken in total disregard of international law has been central to the American playbook in recent decades,” he wrote.  “By casually meddling in Ukrainian politics in recent years,” he added, alluding most likely to the United States’ support for the 2013-14 “Euromaidan protests” which resulted in the ouster of pro-Russian Ukrainian president Viktor Yanukovych, it had “effectively incited Russia to undertake its reckless invasion.”

Bacevich’s article for The Nation also argued that the idea of American exceptionalism was alive and well in Ukraine, driving US policy.  Bacevich defined the idea hyperbolically as the “conviction that in some mystical way God or Providence or History has charged America with the task of guiding humankind to its intended destiny,” with these ramifications:

We Americans—not the Russians and certainly not the Chinese—are the Chosen People.  We—and only we—are called upon to bring about the triumph of liberty, democracy, and humane values (as we define them), while not so incidentally laying claim to more than our fair share of earthly privileges and prerogatives . . . American exceptionalism justifies American global primacy.

Much of Bacevich’s commentary about the Russian invasion of Ukraine reflects his impatience with short and selected historical memory.  Expansion of NATO into Eastern Europe in the 1990s, Bacevich told Democracy Now in mid-March of this year, “was done in the face of objections by the Russians and now we’re paying the consequences of those objections.”  Russia was then “weak” and “disorganized” and therefore it seemed to be a “low-risk proposition to exploit Russian weakness to advance our objectives.”  While the United States may have been advancing the interests of Eastern European countries who “saw the end of the Cold War as their chance to achieve freedom and prosperity,” American decision-makers after the fall of the Soviet Union nonetheless “acted impetuously and indeed recklessly and now we’re facing the consequences.”

* * *

“Short and selected historical memory” also captures Bacevich’s objections to the idea of American exceptionalism.  As he articulates throughout After the Apocalypse, the idea constitutes a whitewashed version of history, consisting “almost entirely of selectively remembered events” which come “nowhere near offering a complete and accurate record of the past” (p.13).  Recently deceased former US Secretary of State Madeleine Albright’s 1998 pronouncement that America resorts to military force because it is the “indispensable nation” which “stand[s] tall and see[s] further than other countries into the future” (p.6) may be the most familiar statement of American exceptionalism.  But versions of the idea that the United States has a special role to play in history and in the world have been entertained by foreign policy elites of both parties since at least World War II, with the effect if not intention of ignoring or minimizing the dark side of America’s global involvement.

The darkest in Bacevich’s view is the 2003 Iraq war, a war of choice for regime change based on the false premise that Saddam Hussein maintained weapons of mass destruction.  After the Apocalypse returns repeatedly to the disastrous consequences of the Iraq war, but it is far from the only instance of intervention that fits uncomfortably with the notion of American exceptionalism.  Bacevich cites the CIA-led coup overthrowing the democratically elected government of Iran in 1953, the “epic miscalculation” (p.24) of the Bay of Pigs invasion in 1961, and US complicity in the assassination of South Vietnamese president Ngo Dinh Diem in 1963, not to mention the Vietnam war itself.  When commentators or politicians indulge in American exceptionalism, he notes, they invariably overlook these interventions.

A telling example is an early 2020 article in Foreign Affairs by then-presidential candidate Joe Biden.  Under the altogether conventional title “Why America Must Lead Again,” Biden contended that the United States had “created the free world” through victories in two World Wars and the fall of the Berlin Wall.  The “triumph of democracy and liberalism over fascism and autocracy,” Biden wrote, “does not just define our past.  It will define our future, as well” (p.16).  Not surprisingly, the article omitted any reference to Biden’s support as chairman of the Senate Foreign Relations Committee for the 2003 invasion of Iraq.

Biden had woven “past, present, and future into a single seamless garment” (p.16), Bacevich contends.  By depicting history as a “story of America rising up to thwart distant threats,” he had regurgitated a narrative to which establishment politicians “still instinctively revert in stump speeches or on patriotic occasions” (p.17) — a narrative that in Bacevich’s view “cannot withstand even minimally critical scrutiny” (p.16).  Redefining the United States’ “role in a world transformed,” to borrow from the book’s subtitle, will remain “all but impossible until Americans themselves abandon the conceit that the United States is history’s chosen agent and recognize that the officials who call the shots in Washington are no more able to gauge the destiny of humankind than their counterparts in Berlin or Baku or Beijing” (p.7).

Although history might well mark Putin’s invasion of Ukraine as an apocalyptic event and 2022 as an apocalyptic year, the “apocalypse” of Bacevich’s title refers to the year 2020, when several events brought into plain view the need to rethink American foreign policy.  The inept initial response to the Covid pandemic in the early months of that year highlighted the ever-increasing economic inequalities among Americans.  The killing of George Floyd demonstrated the persistence of stark racial divisions within the country.  And although the book appeared just after the presidential election of 2020, Bacevich would probably have included the assault on the US Capitol in the first week of 2021, rather than the usual transfer of presidential power, among the many policy failures that in his view made the year apocalyptic.  These failures, Bacevich intones:

 ought to have made it clear that a national security paradigm centered on military supremacy, global power projection, decades old formal alliances, and wars that never seemed to end was at best obsolete, if not itself a principal source of self-inflicted wounds.  The costs, approximately a trillion dollars annually, were too high.  The outcomes, ranging from disappointing to abysmal, have come nowhere near to making good on promises issued from the White House, the State Department, or the Pentagon and repeated in the echo chamber of the establishment media (p.3).

In addition to casting doubts on the continued viability of NATO and questioning any US interest in the fate of Ukraine, After the Apocalypse dismisses as a World War II era relic the idea that the United States belongs to a conglomeration of nations known as “the West,” and that it should lead this conglomerate.  Bacevich advocates putting aside “any residual nostalgia for a West that exists only in the imagination” (p.52).  The notion collapsed with the American intervention in Iraq, when the United States embraced an approach to statecraft that eschewed diplomacy and relied on the use of armed force, an approach to which Germany and France objected.  By disregarding their objections and invading Iraq, President George W. Bush “put the torch to the idea of transatlantic unity as a foundation of mutual security” (p.46).  Rather than indulging the notion that whoever leads “the West” leads the world, Bacevich contends that the United States would be better served by repositioning itself as a “nation that stands both apart from and alongside other members of a global community” (p.32).

After the apocalypse – that is, after the year 2020 – the repositioning that will redefine America’s role in a world transformed should be undertaken from what Bacevich terms a “posture of sustainable self-sufficiency” as an alternative to the present “failed strategy of military hegemony” (p.166).  Sustainable self-sufficiency, he is quick to point out, is not a “euphemism for isolationism” (p.170).  The government of the United States “can and should encourage global trade, investment, travel, scientific collaboration, educational exchanges, and sound environmental practices” (p.170).  In the 21st century, international politics “will – or at least should – center on reducing inequality, curbing the further spread of military fanaticism, and averting a total breakdown of the natural world” (p.51).  But before the United States can lead on these matters, it “should begin by amending its own failings” (p.51), starting with concerted efforts to bridge the racial divide within the United States.

A substantial portion of After the Apocalypse focuses on how racial bias has infected the formulation of United States foreign policy from its earliest years.  Race “subverts America’s self-assigned role of freedom,” Bacevich writes.  “It did so in 1776 and it does so still today” (p.104).  Those who traditionally presided over the formulation of American foreign policy have “understood it to be a white enterprise.”  Non-whites “might be called upon to wage war,” he emphasizes, but “white Americans always directed it” (p.119).  The New York Times’ 1619 Project, which seeks to show the centrality of slavery to the founding and subsequent history of the United States, plainly fascinates Bacevich.  The project in his view serves as an historically based corrective to another form of American exceptionalism, questioning the “very foundation of the nation’s political legitimacy” (p.155).

After the Apocalypse raises many salient points about how American foreign policy interacts with other priorities as varied as economic inequality, climate change, health care, and rebuilding American infrastructure.  But it leaves the impression that America’s relationships with the rest of the world have rested in recent decades almost exclusively on flexing American military muscle – the “failed strategy of militarized hegemony.”  Bacevich says little about what is commonly termed “soft power,” a fluid term that stands in contrast to military power (and in contrast to punitive sanctions of the type being imposed presently on Russia).  Soft power can include such forms of public diplomacy as cultural and student exchanges, along with technical assistance, all of which have a strong track record in quietly advancing US interests abroad.

* * *

To date, five full weeks into the Ukrainian crisis, the United States has conspicuously rejected the “failed strategy of militarized hegemony.”  Early in the crisis, well before the February 24th invasion, President Biden took the military option off the table in defending Ukraine.  Although Ukrainians would surely welcome the deployment of direct military assistance on their behalf, as of this writing NATO and the Western powers are fighting back through stringent economic sanctions – diplomacy with a very hard edge – and provision of weaponry to the Ukrainians so they can fight their own battle, in no small measure to avoid a direct nuclear confrontation with the world’s other nuclear superpower.

The notion of “the West” may have seemed amorphous and NATO listless prior to the Russian invasion.  But both appear reinvigorated and uncharacteristically united in their determination to oppose Russian aggression.  The United States, moreover, appears to be leading both, without direct military involvement but far from heavy-handedly, collaborating closely with its European and NATO partners.  Yet, none of Bacevich’s writings on Ukraine hint that the United States might be on a more prudent course this time.

Of course, no one knows how or when the Ukraine crisis will terminate.  We can only speculate on the long-term impact of the crisis on Ukraine and Russia, and on NATO, “the West,” and the United States.  Ukraine 2022 may well figure as a future data point in American exceptionalism, another example of the “triumph of democracy and liberalism over fascism and autocracy,” to borrow from President Biden’s Foreign Affairs article.  But it could also be one of the data points that its proponents choose to overlook.

Thomas H. Peebles

La Châtaigneraie, France

March 30, 2022

Filed under American Politics, American Society, Eastern Europe, Politics

American Polarizer

James Shapiro, Shakespeare in a Divided America:

What His Plays Tell Us About Our Past and Our Future

(Penguin Press, 2020)

In June 2017, New York City’s Public Theater staged a production in Central Park of William Shakespeare’s Julius Caesar, directed by Oskar Eustis, as part of the series known as Shakespeare in the Park.  As in many 21st century Shakespeare productions, non-whites had several leading roles and women played men’s parts.  Eustis’ Caesar, knifed to death in Act III, bore more than a passing resemblance to President Donald J. Trump: he had strange blond hair, wore overly long red ties, tweeted from a golden bathtub, and had a wife with a Slavic accent.

A protestor interrupted one of the early performances, jumping on stage after the assassination of Caesar to shout, “This is violence against Donald Trump,” according to The New York Times.  Breitbart News picked up on the story with the headline “‘Trump’ stabbed to death.”  Fox News weighed in, expressing concern that the play encouraged violence against the president.  Corporate sponsors pulled out.  Threats were levied not only against the Public Theater and its actors, but also against other Shakespeare productions throughout the country.  A fierce but unedifying battle was fought on social media, with little regard for the ambiguities underlying Caesar’s assassination in the play.

The polemic engendered by Eustis’ Julius Caesar unsettled Columbia University Professor James Shapiro, one of academia’s foremost Shakespeare experts.  Shapiro also serves as Shakespeare Scholar in Residence at the Public Theater and in that capacity had advised Eustis’ team on some of the play’s textual issues.  His most recent work, Shakespeare in a Divided America: What His Plays Tell Us About Our Past and Our Future, constitutes his response to the polemic; in it, he demonstrates convincingly that the frenzied reaction to the 2017 Julius Caesar performance was no aberrational moment in American history.

Starting and finishing with the 2017 performance, Shapiro identifies seven other historical episodes in which a Shakespeare play has been enmeshed in the nation’s most divisive issues: racism, slavery, class conflict, nationalism, immigration, the role of women, adultery, and same-sex love.  Each episode constitutes a separate chapter with a specific year.  Shapiro dives deeply and vividly into the circumstances surrounding all seven, revealing a flair for writing and recounting American history that rivals what he brings to his day job as an interpreter of Shakespeare, his plays and his age.  Of the seven episodes, the most gripping is his description of the 1849 riot at New York City’s upscale Astor Place Opera House, one of the worst in the city’s history up to that point.  By comparison, the 2017 brouhaha over Julius Caesar seems like a Columbia graduate school seminar on Shakespeare.

* * *

Fueled by raw class conflict, nationalism and anti-British sentiment, the Astor Place riot was described in one newspaper as the “most sanguinary and cruel [massacre] that has ever occurred in this country,” an episode of “wholesale slaughter” (p.49) — all arising out of competing versions of Macbeth, starring competing actors.  The Briton William Macready, performing as Macbeth at Astor Place, and the American Edwin Forrest, simultaneously rendering Macbeth at the Bowery Theatre, only a few blocks away but in a decidedly rougher part of town, offered opposing approaches to playing Macbeth that seemed to highlight national differences between the United States and Great Britain: Forrest, the “brash American, Macready the sensitive Englishman” (p.66).  Macready’s “accent, gentle manliness, and propriety represented a world that was being overtaken by everything that Forrest, guiding spirit of the new and for many coarser age of Manifest Destiny, represented” (p.66), Shapiro writes.

Shapiro’s description of the riot underscores how theatres in a rapidly growing New York City in the 1840s were democratic meeting points.  They were “one of the few places in town where classes and races and sexes, if they did not exactly mingle, at least shared a common space.  This meant, in practice, that the inexpensive benches in the pit were filled mostly by the working class, the pricier boxes and galleries were occupied by wealthier patrons, and in the tiers above, space was reserved for African Americans and prostitutes” (p.56).  The Astor Place Opera House, built in 1847, was an explicit response of New York’s upper crust to these democratizing tendencies.  It did not admit unaccompanied women – there was no place for prostitutes – and it imposed a dress code.  The new rules were seen as fundamentally undemocratic, especially by the city’s large number of recent German and Irish immigrants.

While Forrest opened at the Bowery, Forrest fans somehow obtained tickets to the opening Astor Place performance—who paid for them, Shapiro indicates, remains a mystery—and began heckling Macready, telling him to get off the stage, “you English fool.”  Three days later, the heckling recurred.  But this time a crowd of about 10,000 had gathered outside, an unruly mix of Irish immigrants and native-born Americans, groups that had common cause in anti-English and anti-aristocratic sentiment (many of the Irish immigrants were escaping the Irish potato famine of the mid-1840s, often attributed to harsh British policies; see my 2014 review here of John Kelly’s The Graves Are Walking: The Great Famine and the Saga of the Irish People).  Incited by political leaders and their cronies, the crowd began to throw bricks and stones. They fought a battle with police that continued for several days, with dozens of deaths on both sides.

There were “no winners in the Astor Place riots,” Shapiro writes. The mayhem “brought into sharp relief the growing problem of income inequality in an America that preferred the fiction that it was still a classless society” (p.76).  But the riots also spoke to an “intense desire by the middle and lower classes to continue sharing the public space [of the theatre], and to oppose, violently if necessary, efforts to exclude them from it.  Shakespeare continued to matter and would remain common cultural property in America” (p.78).

In two other powerful chapters, Shapiro demonstrates how Shakespeare’s plays also intertwined with mid-19th century America’s excruciating attempts to come to terms with racism and slavery.  One examines abolitionist former president John Quincy Adams’ public feud in the 1830s over what he considered the abominable interracial relationship Shakespeare depicts between Desdemona and the dark-skinned Othello.  In the second, Shapiro shows how, in a twist that was itself Shakespearean, fate linked President Abraham Lincoln, a man who loved Shakespeare and identified with Macbeth, to his assassin, second-rate Shakespearean actor John Wilkes Booth, himself obsessed with both Julius Caesar and what he perceived as Lincoln’s efforts to undermine the supremacy of the white race.

John Quincy Adams, who served as president from 1825 to 1829, found Desdemona’s physical intimacy with Othello, known at the time as “amalgamation” (“miscegenation” did not enter the national vocabulary until the 1860s), to be an “unnatural passion” against the laws of nature.  Adams’ views might have gone largely unnoticed but for a dinner party in 1833, in which the 66-year-old former president was seated next to 23-year-old Fanny Kemble, a rising young Shakespearean actress from England.  Adams apparently thrust his views of the Othello-Desdemona relationship upon the unsuspecting Kemble.

Two years later, Kemble published a journal about her trip to the United States, in which she described her dinner conversation with the former president.  A piqued Adams felt compelled to respond, elaborating in print about how repellent he found the Desdemona-Othello relationship. The dinner conversation of two years earlier between the ex-president and the rising British actress thus became national news and, with it, Adams’ anxieties about not only the dangers of race-mixing but also the threat posed by disobedient women.

Yet, the ex-president who was so firmly against amalgamation was also a firm abolitionist.  Adams’ abolitionist convictions, Shapiro writes, “seem to have required a counterweight, and he found it in this repudiation of amalgamation” (p.20).  By directing his hostility at Desdemona rather than Othello, moreover, Adams astutely sidestepped criticizing black men, and it “proved more convenient to attack a headstrong young fictional woman than a living one” (p.20).  Although Adams was a prolific writer, his public feud with Kemble represented his sole written attempt to square his disgust for interracial marriage with his abolitionist convictions, and he chose to do so “only through his reflections on Shakespeare” (p.20).

Abraham Lincoln, from humble frontier origins with almost no formal schooling, developed a life-long passion for Shakespeare as a youth.  Shapiro notes that the adult Lincoln regularly asked friends, family, government employees, and relative strangers to listen to him recite, sometimes for hours on end – and then discuss – the same few passages from Shakespeare again and again.  John Wilkes Booth too grew up with Shakespeare, but in altogether different circumstances.

Booth’s father owned a farm in rural Maryland but was also a leading English Shakespearean actor who immigrated to the United States and became a major figure on the American stage.  His three sons followed in their father’s footsteps, with older brothers Edwin and Junius attaining genuine star status, a status that eluded their younger brother John.  Although Maryland was a border state that did not join the Confederacy, John, who had been convinced from his earliest years that whites were superior to blacks, was naturally drawn to the Southern cause.

In 1864, both the year of Lincoln’s re-election and the 300th anniversary of Shakespeare’s birth, Booth was stalking Lincoln and plotting his removal with Confederate operatives.  Lincoln, who had less than six months to live when he was re-elected in November, found himself brooding more and more about Macbeth in his final months, and especially about the murdered King Duncan.  Through his reflection upon the guilt-ridden Macbeth, Shapiro writes, Lincoln felt the “deep connection between the nation’s own primal sin, slavery, and the terrible cost, both collective and personal, exacted by it” (p.113).

After Booth assassinated Lincoln at Ford’s Theater in Washington in April 1865, many of Lincoln’s enemies likened the assassin, whose favorite play was Julius Caesar, to Brutus as a man who killed a tyrant.  But Macbeth proved to be the play that the nation settled on to “give voice to what happened, and define how Lincoln was to be remembered” (p.116).  Booth had “failed to anticipate that the man he cold-bloodedly murdered would be revered like Duncan, his faults forgotten” (p.118).  For a divided America, the universal currency of Shakespeare’s words offered what Shapiro terms a “collective catharsis” which permitted a “blood-soaked nation to defer confronting once again what Booth declared had driven him to action: the conviction that America ‘was formed for the white not for the black man’” (p.118).

The year 1916 was the 300th anniversary of Shakespeare’s death, a year in which one of his least known plays, The Tempest, was used to bolster the case for anti-immigration legislation. The Tempest centers on Caliban, who is left behind, rather than on those who immigrate.  But the point is the same, Shapiro argues: a “more hopeful community . . . depends on somebody’s exclusion” (p.125).  This notion resonated in particular with Massachusetts Senator Henry Cabot Lodge, an avid Shakespeare reader who led the early 20th century anti-immigration campaign.

The unusual number of performances of The Tempest during that tercentenary year meshed with the fierce debate that Lodge led in Congress over immigration.  The legislation that passed the following year curtailed the influx into the United States of immigrants representing “lesser races,” most frequently a reference to Southern and Eastern Europeans. “How Shakespeare and especially The Tempest were conscripted by those opposed to the immigration of those deemed undesirable is a lesser known part of this [immigration] story” (p.124), Shapiro writes.

Closer to the present, Shapiro has chapters on the 1948 Broadway musical Kiss Me, Kate, later a film, about the cast of a production of Shakespeare’s The Taming of the Shrew, which raised the issue of the roles of women in a post-war society; and on the 1998 film Shakespeare in Love, by far the most successful film to date about Shakespeare or any of his plays, which began as a film about same-sex love but evolved into one about adultery.

Kiss Me, Kate takes place backstage at a performance of The Taming of the Shrew.  With music and lyrics by Cole Porter, the Broadway musical contrasted the emerging, post-World War II view of the role of women with the conventional stereotyped gender roles in the Shakespeare play itself, thereby featuring “rival visions of the choices women faced in postwar America” (p.160).  In Shakespeare’s play, “women are urged to capitulate and their obedience to men is the norm,” while backstage “independence and unconventionality hold sway” (p.160).  Kiss Me, Kate deftly juxtaposed a “front stage Shakespeare world that mirrored the fantasy of a patriarchal, all-white America” with a backstage one that was “forthright about a woman’s say over her desires and her career” (p.162).

In the earliest version of the film Shakespeare in Love, in 1992, Will found himself attracted to the idea of same-sex attraction (he was actually attracted to a woman dressed as a man, but the point was that Will thought she was a he).  But same-sex love was reduced to a mere hint in the final version, which turned on how the unhappily married Will’s affair with another woman, Viola, helped him overcome his writer’s block, finish Romeo and Juliet, and go on to greatness.  Those creating and marketing Shakespeare in Love, Shapiro writes, “clearly felt that a gay or bisexual Shakespeare was not something that enough Americans in the late 1990s were ready to accept” (p.194).  For box-office success, “Shakespeare could be an adulterer, but he had to be a heterosexual one in a loveless marriage” (p.194).

Shakespeare in Love ends with Viola leaving Will and England for America, reinforcing a myth that persisted from the 1860s through the 1990s of a direct American connection to Shakespeare  — anti-immigration Senator Lodge was one of its most exuberant proponents.  This fantasy, Shapiro writes, speaks to our desire to “forge a physical connection between Shakespeare and America” as the land where his “inspiring legacy came to rest and truly thrived” (p. 193).

* * *

While finding no credible evidence for a direct American  connection to Shakespeare, Shapiro sees a legacy in Shakespeare’s plays that should inspire Americans of all hues and stripes.  Pained by the polarization he witnessed at the 2017 Julius Caesar performance, Shapiro expresses the hope that his book might “shed light on how we have arrived at our present moment, and how, in turn, we may better address that which divides and impedes us as a nation” (p.xxix).  The hope seems forlorn in light of the examples he so brilliantly details, pointing mostly in the other direction: a Shakespeare on the cutting edge of America’s social and political divisions, with his plays often doing the cutting.

Thomas H. Peebles

Paris, France

September 19, 2021

[NOTE: A nearly identical version of this review has also been posted to the Tocqueville 21 Blog, maintained in connection with the American University of Paris’ Tocqueville Review and its Center for Critical Democracy Studies]





Filed under American Politics, American Society, Literature, Politics, United States History

Converging Visions of Equality


Peniel E. Joseph, The Sword and the Shield:

The Revolutionary Lives of Malcolm X and Martin Luther King, Jr. (Basic Books)

[NOTE: A version of this review has been posted to the Tocqueville 21 blog:  Tocqueville 21 takes its name from the 19th century French aristocrat who gave Americans much insight into their democracy.  It seeks to encourage in-depth thinking about democratic theory and practice, with particular but by no means exclusive emphasis on the United States and France.  The site is maintained in connection with the American University of Paris’ Tocqueville Review and its Center for Critical Democracy Studies].

Martin Luther King, Jr., and Malcolm X met only once, a chance encounter at the US Capitol on March 26, 1964.  The two men were at the Capitol to listen to a debate over what would become the Civil Rights Act of 1964, a measure that banned discrimination in employment, mandated equal access to most public facilities, and had the potential to be the most consequential piece of federal legislation on behalf of equality for African-Americans since the Reconstruction era nearly a century earlier.  There wasn’t much substance to the encounter. “Well, Malcolm, good to see you,” King said.  “Good to see you,” Malcolm responded. There may have been some additional light chitchat, but not much more.  Fortunately, photographers were present, and we are the beneficiaries of several iconic photos of the encounter.

That encounter at the Capitol constitutes the starting point for Peniel Joseph’s enthralling The Sword and the Shield: The Revolutionary Lives of Malcolm X and Martin Luther King, a work that has some of the indicia of a dual biography, albeit highly condensed.  But Joseph, a professor at the University of Texas at Austin who has written prolifically on modern African American history, places his emphasis on the two men’s intellectual journeys.  Drawing heavily from their speeches, writings and public debates, Joseph challenges the conventional view of the two men as polar opposites who represented competing visions of full equality for African Americans.  The conventional view misses the nuances and evolution of both men’s thinking, Joseph argues, obscuring the ways their politics and activism came to overlap.  Each plainly influenced the other.  “Over time, each persuaded the other to become more like himself” (p.13).

My final stages of this review on the convergence of the two men’s thinking coincided with the trial of Derek Chauvin for the killing of George Floyd last May, along with the recent killing of still another black man, Daunte Wright, in the same Minneapolis metropolitan area.  Watching and reading about events in Minneapolis, I couldn’t help concluding that the three familiar words “Black Lives Matter”  –  the movement that led demonstrations across the country and the world last year to protest the Floyd killing — also neatly encapsulate the commonalities that Joseph identifies in The Sword and the Shield.

* * *

In March 1964, King was considered the “single most influential civil rights leader in the nation” (p.2), Joseph writes, whereas Malcolm, an outlier in the mainstream civil rights movement, was “perhaps the most vocal critic of white supremacy ever produced by black America” (p.4).    The two men shared extraordinary rhetorical and organizational skills.  Each was a charismatic leader and deep thinker who articulated in galvanizing terms his vision of full equality for African Americans.  But these visions sometimes appeared to be not just polar opposites but mutually exclusive.

In the conventional view of the time, King, the Southern Baptist preacher with a Ph.D. in theology, deserved mainstream America’s support as the civil rights leader who sought integration of African Americans into the larger white society, and unfailingly advocated non-violence as the most effective means to that end.  White liberals held King in high esteem for his almost religious belief in the potential of the American political system to close the gap between its lofty democratic rhetoric and the reality of pervasive racial segregation, discrimination and second-class citizenship, a belief Malcolm considered naïve.

A high school dropout who had served time in jail, Malcolm became the most visible spokesman for the Nation of Islam (NOI), an idiosyncratic American religious organization that preached black empowerment and racial segregation.  Often termed a “black nationalist,” Malcolm found the key to full equality in political and economic empowerment of African American communities.  He considered racial integration a fool’s errand and left open the possibility of violence as a means of defending against white inflicted violence.  He seemed to embrace some form of racial separation as the most effective means to achieve full equality and improve the lives of black Americans – a position that the media found to be ironically similar to that of the hard-core racial segregationists with whom both he and King were battling.

But Joseph demonstrates that Malcolm was moving in King’s direction at the time of their March 1964 encounter.  Coming off a bitter fallout with the NOI and its leader, Elijah Muhammad, he had cut his ties with the organization just months before the encounter.  He had traveled to Washington to demonstrate his support for the civil rights legislation under consideration.  Thinking he could make a contribution to the mainstream civil rights movement, Malcolm sought an alliance with King and his allies.  Although that alliance never materialized, King began to embrace positions identified with Malcolm after the latter’s assassination less than 11 months later, stressing in particular that economic justice needed to be a component of full equality for African Americans.  King also became an outspoken opponent of American involvement in the war in Vietnam, of which Malcolm had long been critical.

Singular events had thrust both men onto the national stage.  King rose to prominence as a newly-ordained minister who at age 26 became the most audible voice of the 1955-56 Montgomery, Alabama, bus boycott, after Rosa Parks famously refused to give up her seat on a public bus to a white person.  Malcolm’s rise to fame came in 1959 through a nationally televised 5-part CBS documentary on the NOI, The Hate That Hate Produced, hosted by the then little-known Mike Wallace.  The documentary was an immediate sensation.  It was a one-sided indictment of the NOI, Joseph indicates, intended to scare and outrage whites.  But it made Malcolm and his NOI boss Elijah Muhammad heroes within black communities across the country.  King seemed to buy into the documentary’s theme, describing the NOI as an organization dedicated to “black supremacy,” which he considered “as bad as white supremacy” (p.85).

But even at this time, each man had connected his US-based activism to anti-colonial movements that were altering the face of Africa and Asia.  Both recognized that the systemic nature of racial oppression “transcended boundaries of nation-states” (p.73).    Malcolm made his first trip abroad in 1959, to Egypt and Nigeria.  The trip helped him “internationalize black political radicalism,” by linking domestic black politics to the “larger world of anti-colonial and Third World liberation movements” (p.18-19), as Joseph puts it.  King, whose philosophy of non-violence owed much to Mahatma Gandhi, visited India in 1959, characterizing himself as a “‘pilgrim’ coming to pay homage to a nation liberated from colonial oppression against seemingly insurmountable odds”  (p.80).   After the visit, he “proudly claimed the Third World as an integral part of a worldwide social justice movement” (p.80).

After his break with the NOI and just after his chance encounter with King at the US Capitol, Malcolm took a transformative five-week tour of Africa and the Middle East in the spring of 1964.  The tour put him on the path to becoming a conventional Muslim and prompted him to back away from anti-white views he had expressed while with the NOI.  In Mecca, Saudi Arabia, he professed to see “sincere and true brotherhood practiced by all colors together, irrespective of their color.” (p.188).   He went on to Nigeria and “dreamed of becoming the leader of a political revolution steeped in the anti-colonial fervor sweeping Africa” (p.191).  Malcolm’s time in Africa, Joseph concludes, “changed his mind, body, and soul . . . The African continent intoxicated Malcolm X and informed his political dreams” (p.192-93).

By the time of their March 1964 meeting, moreover, the two men had begun to recognize each other’s potential.  After over a decade of forcefully criticizing the mainstream civil rights movement, Malcolm now recognized King’s goals as his own but chose different methods to get there.  Malcolm also had a subtle effect on King.  The “more he ridiculed and challenged King publicly,” Joseph writes, the more King “reaffirmed the strength of non-violence as a weapon of peace capable of transforming American democracy” (p.155).  King for his part had begun to look outside the rigidly segregated South and toward major urban centers in the North, Malcolm’s bailiwick, as possible sites of protest that would expand the freedom struggle beyond its southern roots.

Joseph cites three instances in which Malcolm extended written invitations to King, all of which went unanswered. But in early February 1965, after Malcolm had participated in a panel discussion with King’s wife, King concluded that the time had come to meet with his formidable peer.  Later that month, alas, Malcolm was gunned down in New York, almost certainly the work of the NOI, although details of the assassination remain murky to this day.

In the three years remaining to him after Malcolm’s assassination, King borrowed liberally from the black nationalist’s playbook, embracing in particular the notion of economic justice as a necessary component of full equality for African Americans.  Although he never wavered in his commitment to non-violence, King saw his cause differently after the uprising in the Watts section of Los Angeles in the summer of 1965.  Watts “transformed King,” Joseph writes, making clear that civil unrest in Northern cities was a “product of institutional racism and poverty that required far more material and political resources than ever imagined by the architects of the Great Society” (p.235).  King also began to speak out publicly in 1965 against the escalation of America’s military commitment in Vietnam, marking the beginning of the end of his close relationship with President Johnson.

King delivered his most pointed criticism of the war on April 4, 1967, precisely one year prior to his assassination, at the Riverside Church in New York City, abutting Harlem, Malcolm’s home base.  Linking the war to the prevalence of racism and poverty in the United States, King lamented the “cruel irony of watching Negro and white boys on TV screens as they kill and die together for a nation that has been unable to seat them together in the same schools.” (p.267).  Joseph terms King’s Riverside Church address the “boldest political decision of his career” (p.268).  It was the final turning point for King, marking his formal break with mainstream politics and his “full transition” from a civil rights leader to a “political revolutionary” who “refused to remain quiet in the face of domestic and international crises” (p.268).

After Riverside, in his last year, King became what Joseph describes as America’s “most well-known anti-war activist” (p.271).  King lent a Nobel Prize-winner’s prestige to a peace movement struggling to find its voice at a time when most Americans still supported the war.  Simultaneously, he pushed for federally guaranteed income, decent and racially integrated housing and public schools — what he termed a “revolution of values” (p.287).  During this period, Stokely Carmichael, who once worked with King in Mississippi (and is the subject of a Joseph biography), coined the term “Black Power.”  In Joseph’s view, the Black Power movement represented the natural extension of Malcolm’s political philosophy, post-Malcolm. Although King frequently criticized the movement in his final years, he nonetheless found himself in agreement with much of its agenda.

In his final months, King supported a Poor People’s march on Washington, D.C.  He was in Memphis, Tennessee in April 1968 on behalf of striking sanitation workers, overwhelmingly African-American, who held jobs but were seeking better salaries and more humane working conditions, when he too was felled by an assassin’s bullet.

* * *

After reading Joseph’s masterful synthesis, it is easy to imagine Malcolm supporting King’s efforts in Memphis that April.  And if the two men were still with us today, it is equally easy to imagine both embracing warmly the “Black Lives Matter” movement.


Thomas H. Peebles

La Châtaigneraie, France

April 20, 2021





Filed under American Politics, American Society, Political Theory, United States History

Papa Franz’ Columbia Circle


Charles King, Gods of the Upper Air:

How A Circle of Renegade Anthropologists Reinvented Race, Sex, and Gender

In the Twentieth Century (Doubleday)


A book billed as an inside look at the anthropology department of Columbia University from the 1890s through the 1940s seems unlikely to send readers scurrying for a copy.  But readers might be inclined to scurry if they knew that in this timeframe, a small circle of anthropologists associated with Columbia essentially rewrote the books on anthropology and more generally on human nature, giving shape to modern ways in which we think about issues of race, sex and gender, along with what we mean by culture and how we might understand people living in societies very different from our own.  These epic transformations in thinking and the anthropologists behind them constitute the subject of Charles King’s engaging Gods of the Upper Air: How A Circle of Renegade Anthropologists Reinvented Race, Sex, and Gender In the Twentieth Century.

King’s work revolves around Franz Boas (1856-1942), who taught in Columbia’s anthropology department off and on from 1887 through the late 1930s, and three of his star students, all female: Margaret Mead (1901-1978), Ruth Benedict (1887-1948), and Zora Neale Hurston (1891-1960).   The cantankerous Papa Franz, as he was known, was a German immigrant who made a career of warning against jumping from one’s own “culture-bound schemas to pontificating about the Nature of Man” (p.247), as King puts it.  More than any other intellectual of his era, Boas attacked the pseudo-science that seemed to support society’s deepest prejudices, jousting frequently with late 19th and early 20th century racial theorists who “confidently pronounced that they had all of humanity figured out” (p.247).

Mead and Benedict are today better known than Boas, often thought of together as 20th century pioneers in anthropology and the social sciences. Mead gained fame for her studies of adolescent girls in far-flung places, and how they formed their attitudes toward sex and gender roles.  Benedict almost singlehandedly refined and redefined how we think about the word  “culture,” coining the term “cultural relativity.”  But the two pioneering anthropologists also enjoyed an intimate personal relationship throughout much of their adult lives, even as Mead regularly ran through and disposed of husbands.  King provides probing detail on the Mead-Benedict relationship and the many men in Mead’s complex personal life.   Hurston, an African American, was a talented novelist, poet and essayist as well as anthropologist.  Although she lacked the public profile of Mead or Benedict in her lifetime, she has vaulted since her death into the upper echelon of 20th century African-American intellectuals, especially after being “rediscovered” by the poet and novelist Alice Walker in 1975.

King, a professor of international affairs and government at Georgetown University, ably captures how Papa Franz and his circle of renegade anthropologists used Columbia as a point of departure while traveling to the furthest reaches of the globe to develop their insights on human nature and human cultures.  While their insights varied, the four Columbia anthropologists all saw humanity as an indivisible whole.  They put into practice the notion that we can best understand other societies with a data driven methodology, where conclusions are always subject to refinement and change.  Social categories, such as race and gender, they agreed, should be considered artificial, the products of “human artifice, residing in the mental frameworks and unconscious habits of a given society” (p.10).  For all four, the “most enduring prejudices” were the “comfortable ones, those hidden up close; seeing the world as it is requires some distance, a view from the upper air” (p.345).

To the personal stories and professional thinking of Columbia’s renegade anthropologists, King deftly adds rich detail on their cohorts and contemporaries and the times in which they all lived.  The resulting work, written in a mellifluous style, is at once riveting yet surprisingly easy to understand – ample reason to scurry for a copy.

* * *

Franz Boas was born in 1856 into an assimilated Jewish family in Prussia, before Germany had become a unified country.  At age 28, he set out to study migration patterns of the Inuit, the indigenous people on Baffin Island in the Arctic.  Boas actually lived among the Inuit people, a novelty for his time. When he put together his conclusions from his time on the island, he began using the German word Herzensbildung, the “training of one’s heart to see the humanity of another” (p.30), a notion that would shape his overall approach to anthropology over the next sixty years.

Boas immigrated to the United States in 1884, primarily to pursue his love interest in his future wife, Austrian American Marie Krackowizer.   Anthropology was then a term, King explains, that people were beginning to use for the combination of travel, artifact collection, language learning, and bone hunting.  But for Boas, anthropology was a data-driven discipline, a form of social science.  More than his peers, Boas emphasized the relationship between the data and the practitioner. “What counted as social scientific data – the specific observations that researchers jotted down in their field notes – was relative to the worldview, skill sets, and preexisting categories of the researchers themselves” (p.71).   A good anthropologist had to be committed to the critical refinement of his or her own experience in light of data gathered.  That was the “whole point of purposefully throwing yourself into the most foreign and remote of places. You had to gather things up before you refined them down” (p.247).

Boas’s penchant for following the data put him on what King describes as a “collision course with his adopted country’s most time-honored way of understanding itself, a cultural obsession that Europeans and Americans had learned to call race” (p.77).  In the late 19th and early 20th centuries, the concept of race was central to the field of anthropology, part of an “unshakable natural order” (p.79).  Humans had races in the same way that other animals had stocks or pedigrees.  A person’s lips, hair texture, nose or head shape, and skin tone all confirmed the multiplicity of human races, arranged in a sort of pyramid, with the white “races” of Northern and Western Europe and Protestant America at its apex.

Boas set forth his most comprehensive rejoinder to early 20th century race theories purporting to be based on science in his 1911 book, The Mind of Primitive Man.   Physical traits were a “poor guide to distinguishing advanced peoples from more backward ones” (p.100), Boas contended.  Not only was there “no bright line dividing one race from another, but the immense variation within racial categories called into question the utility of the concept itself” (p.101).  European success in exploiting resources in Africa and American success in settling the North American continent were not due to some inherent superiority on the part of the people typically called “civilized.”  Chance and time could be “equally good explanations for disparities in achievement” (p.100), he suggested.

Our ideas about race are themselves products of history, Boas implied, a “rationalization for something a group of people desperately want to believe” (p.106).  The pseudo-scientific racial theories that abounded in early 20th century Europe and America helped convince people that they are “higher, better and more advanced than some other group.  Race was how Europeans [and Americans] explained to themselves their own sense of privilege and achievement” (p.106).  For Boas, the spread of Europeans overseas during the age of exploration and the establishment of empires across the lands they conquered may have “cut short whatever material and cultural development had been in process there” (p.100).

Boas died in 1942, a time when racial theories emanating from his native Germany, then in the throes of Nazi rule, were being applied to exterminate Europe’s Jewish population.  On the day he died, he purportedly told a refugee from Nazi-occupied Paris, “We should never stop repeating the idea that racism is a monstrous error and an impudent lie” (p.316).  Among Boas’ disciples, Margaret Mead was considered his closest intellectual heir.  Through Mead, Boas’ core ideas “lived on and spread to a broader audience than Papa Franz ever could have dreamed” (p.338), King writes.

Mead, who grew up in a highly educated Philadelphia family, graduated in 1923 from Barnard, Columbia’s “sister” school.  From there, she became one of the first women to enroll in Papa Franz’ fiefdom,  Columbia’s graduate program in anthropology.  Under Boas’ guidance, Mead charted a “new way of doing anthropology itself” (p.148).  She “wanted to know about peoples’ lives: how they thought about childhood and aging, what it meant to be an adult, what they thought of as sexual pleasure, whom they loved, when they felt the sting of public humiliation or the gnawing sickness of private shame” (p.148).  What set Mead apart from her peers was that she determined to do this with the “invisible mass of people whom anthropologists . . . always seemed to miss – women and girls” (p.148).

After Mead completed her dissertation, Boas suggested that she conduct first-hand field research, much as he had done as a young man on Baffin Island, and pointed her to American Samoa, a United States territory in the South Pacific.  Mead spent much of her time on three villages on the remote island of Ta’u.  The point of examining Samoa was to “see the schemes that people halfway around the world, in a very different environment, climate, and culture, had devised for rendering children into adults” (p.163).  To understand the lives, fears, passions, and worries of adolescent girls, Mead spent her time talking directly to them, the “true experts of the crisis of adolescence” (p.167).

The result of Mead’s study of adolescent girls in the Ta’u villages was Coming of Age in Samoa.  The book’s basic claim was that the Samoans of Ta’u “did not conceive of adolescence in precisely the same way that Americans tended to see it” (p.167).  Samoan girls knew as much about sex as their counterparts in New York, probably more, Mead found.  But she observed no real sense of romantic love, inextricably linked in Western societies with monogamy, exclusiveness, jealousy, and undeviating fidelity.

Growing up in New Guinea, Mead’s sequel to Coming of Age in Samoa, appeared in 1930, before she was 30.  Given her frank discussions of sex and her “refusal to acknowledge the self-evident superiority of Western Civilization,” Mead was already considered an “outspoken, even scandalous public scientist” (p.185).  Seemingly overnight, she had become “one of the country’s foremost experts on the relevance of the most remote parts of the globe for understanding what was happening back home” (p.185).  From that point until her death in 1978, Mead was the “face of her discipline, the epitome of an engaged scholar,” even though other academics considered her “somehow outside the mainstream” (p.340).  King summarizes Mead’s core idea as a full recognition of women as human beings, “with the power to choose whatever social roles they wanted – mothers and caretakers as well as anthropologists and poets” (p.339).

As a young woman, Mead had enrolled in Columbia’s graduate program in anthropology at the urging of Ruth Benedict, fourteen years Mead’s senior and already a respected anthropologist.  Benedict served initially as Mead’s teacher, mentor and intellectual anchor.  Thereafter, their relationship evolved into something more intimate and decidedly more complicated.  But it was never quite the relationship Benedict hoped for.

Before she arrived at Columbia, Mead had married Luther Cressman, then a theology student and later an Episcopalian minister.  By the time Boas suggested she travel to American Samoa, Mead was having an affair with a prominent Canadian anthropologist, Edward Sapir, a former student of Boas, even though she was then finding herself increasingly attracted to Benedict.  Another dashing male lover later replaced Sapir, with Benedict serving as what might be unceremoniously described as Mead’s “backup.”  At least two other men subsequently swept Mead away. The players may have been different for Mead in the often cruel game of love, King writes, but it was always the same script, with Mead returning to Benedict until the next Mr. Right Now came along.  Mead’s enduring but erratic love for Benedict, King suggests, underscores her life-long inability to “settle down to one kind of relationship, whether with one person or with one gender” (p.258).

Benedict was always disappointed when the object of her affection moved from one man to the next (there’s no indication of other women in Mead’s life).  But she was herself a formidable anthropologist who rose to be Boas’ chief assistant at Columbia and was primed to become the department chairman upon his retirement, only to have the position given to a man from outside the university.  Unlike Mead, who was most interested in how individuals function within the structures that a given society constructs, Benedict was a “big picture” theorist, fashioning some of anthropology’s most sweeping insights about those structures.

In her signature work, Patterns of Culture, published in 1934, which King describes as “arguably the most cited and most taught work of anthropological grand theory ever” (p.267), Benedict argued that real analysis of human societies starts with discarding prior assumptions that one’s own way of seeing the world is universal.  Paying attention to broad patterns enables one to grasp what makes a society “both different from all others and intrinsically meaningful to itself – its way of seeing social life, custom, and ritual, of defining the goals and pathways of life itself” (p.265).  All societies, each with its own coherence and sense of integration that “allows for individuals inside that society to find the way from childhood to adulthood,” Benedict argued, are “just snippets of a ‘great arc’ of possible ways of behaving” (p.264).

During World War II, while in Washington working at the Office of War Information, Benedict wrote her final book, The Chrysanthemum and the Sword.  Benedict was tasked with explaining Japan to America’s policy makers, part of an effort to understand the country’s enemy.  The standard view within the United States government was that the conflict in the Pacific, unlike that in Europe, was “nothing less than a struggle for racial dominance” (p.320).  The Japanese were considered inherently sneaky, treacherous, untrustworthy, and given to a fanatical allegiance to their country, whereas Germany was made up of essentially good people whose government had been hijacked by an evil clique.

Although she had no serious expertise in Japan, and no way to study Japanese culture first hand in wartime, Benedict aimed to counter the prevailing US government view of the Japanese.  The point of her title, The Chrysanthemum and the Sword, was that a society that had “delicate, refined ideas of beauty and creative expression could also value militarism, honor, and subservience” (p.327).  The work was made available to the general public in 1946.  In the years that followed, King notes, The Chrysanthemum and the Sword earned a “good claim to being the most widely read piece of anthropology ever written” (p.330).

Benedict wanted to go to Japan with the American occupation after the war, but was turned down as being too old and, likely, being female.  After an exhausting trip to Europe in 1948, Benedict, then age 61, died suddenly of a heart attack.  Over her long career,  King writes, Benedict  provided a “clearer definition than anyone before her of how social science could be its own design for living.”  She distilled what she had seen, where she had been, and what she was into a “code that was at once analytically sharp and deeply moral” (p.266).

While Zora Neale Hurston did not come close in her lifetime to achieving the high profile of Benedict and Mead, King suggests that this was due at least in part to the same racism that impeded all African-Americans in her time.  The “chasm of race,” he writes, “separated Hurston from the other members of the Boas circle, even at a time when Boas’ students were assiduously denying that race was a fundamental division in human societies” (p.293).

Hurston was born in Alabama but grew up in Central Florida.   All four of her grandparents had been slaves.   Like Mead, she enrolled at Barnard and from there found her way to Columbia’s anthropology department.  Simultaneously, Hurston became part of the African-American intellectual and cultural movement known as the Harlem Renaissance, a “sweeping experiment in redefining blackness in a country that had been built on defining it for you” (p.193), as King puts it.  She became close to many of its leading luminaries, particularly the poet Langston Hughes.

Hurston returned to her native Central Florida at a time when Ku Klux Klan terror was widespread.  A “fully formed yet unappreciated recipe for living as a human being seemed to be lurking in the dense pinelands and lakeshores of northern and central Florida” (p.201-02), King writes.  More than Mead or Benedict, Hurston “found her calling in fieldwork” (p.201).  No member of the Boas circle could claim to have gone as deeply as Hurston into the “lived experience of the people she was trying to understand” (p.292).

The result of Hurston’s work in Florida was Mules and Men, published in 1935.  Mules and Men marked an unprecedented effort to send the reader “deep inside southern black towns and work camps – not as an observer but as a kind of participant” (p.212).  Boas wrote the book’s preface, describing it as the first attempt to understand the “true inner life of the Negro” (p.212).  Mules and Men confirmed the “basic humanity of people who were thought to have lost it, either because of some innate inferiority or because of the cultural spoilage produced by generations of enslavement” (p.214).

Mules and Men appeared the same year as another of Mead’s major studies, Sex and Temperament.  The critics did not view the two works in equal terms. “Volumes on Samoans or New Guineans were hailed as commentaries on the universal features of human society,” King observes, whereas one about African Americans in the American South was a “quaint bit of storytelling” (p.275).  Hurston subsequently spent time in Jamaica and Haiti, producing significant works on voodoo and folklore, while she also churned out essays, short stories and novels.  King derived his title from a deleted chapter in Hurston’s 1942 autobiography, Dust Tracks, where she wrote that the “gods of the upper air” had uncovered many new faces for her eyes.

Hurston died unheralded in 1960.  But in 1975, poet and novelist Alice Walker wrote an essay for Ms. magazine in which she recorded her efforts to retrace Hurston’s life journey.  Hurston, Walker wrote, was “one of the most significant unread authors in America” (p.336).  Walker’s essay marked the start of a Hurston revival that would “elevate her into the pantheon of great American writers, with an almost cult-like following” (p.337).  Today, King suggests, Hurston’s reputation arguably exceeds that of Langston Hughes and her other contemporaries of the Harlem Renaissance.

* * *

Boas and the Columbia anthropologists in his circle steered human knowledge in a remarkable direction, King concludes, “toward giving up the belief that all history leads inexorably to us” (p.343).  They deserve credit for expanding the range of people who should be “treated as full, purposive, and dignified human beings” (p.343).  But Boas would be the first to admit that expansion of that range remains a work in progress.


Thomas H. Peebles

La Châtaigneraie, France

December 1, 2020







Filed under American Society, European History, Gender Issues, Intellectual History, Science

German Lessons: Is Mississippi Learning?


Susan Neiman, Learning from the Germans:

Race and the Memory of Evil (Farrar, Straus & Giroux) 

Less than two months ago, protests and public demonstrations erupted on an unprecedented scale across the United States and throughout the world over the killing of African American George Floyd at the hands of a Minneapolis, Minnesota, police officer, captured on videotape.  Fueled by the movement known as “Black Lives Matter,” the protests and demonstrations that continue to this day focus most directly on police violence and reform of criminal justice practices.  But at a deeper level the protests also seek to call attention to the endurance of systemic racism in the United States, the subject that hovers over Susan Neiman’s thought-provoking Learning from the Germans: Race and the Memory of Evil,  giving her work a timeliness she probably never imagined when it first appeared last year.

To address systemic racism, Neiman argues, the United States needs to confront more directly and honestly the realities of its racist past: human bondage dating from the early 17th century which plunged the United States into a Civil War in the mid-19th century, followed by an additional century of legally enforced segregation, rampant discrimination, racial terrorism and second-class citizenship, with official sanction of racial discrimination not ending until passage of the Civil Rights Act in 1964 and the Voting Rights Act the following year.  Neiman’s title, moreover, is a giveaway of her surprising suggestion that Americans can learn much from how Germany finally confronted its own racist past, specifically the Holocaust, Nazi Germany’s project to exterminate Europe’s Jewish population that it perpetrated over a 12-year period, from 1933 to 1945.

When Neiman looked at the contentious issue of monuments honoring Southern Civil War veterans from the perspective of Germany, where she has lived on and off since 1982, she found it hard to “imagine a Germany filled with monuments to the men who fought for the Nazis.  My imagination failed. For anyone who has lived in contemporary Germany, the vision of statues honoring those men is inconceivable.” Germans who lost family members during World War II realize that their loved ones “cannot be publicly honored without honoring the cause for which they died” (p.267).  In the United States, by contrast, the president and a substantial if declining portion of the public still support maintaining statues and memorials honoring the cause of the Southern Confederacy, a reflection of the broader differences between the two countries in coming to terms with their racist pasts that Neiman seeks to highlight.

Learning from the Germans is not an attempt to compare the evils of slavery and discrimination against African Americans in the United States to those of the murder of Jews and others during the Holocaust, an exercise Neiman considers fruitless.  Rather, her work revolves around what might be characterized as “comparative atonement,” for which she uses her preferred if foreboding German word, Vergangenheitsaufarbeitung, translated into English as “working off the past.”  The word came into use in German in the 1960s as an “abstract polysyllable way of saying We have to do something about the Nazis” (p.30).  In atoning for its racist past, Germany is markedly further down the path to Vergangenheitsaufarbeitung than the United States, Neiman argues, but she also emphasizes how East Germany, when it existed and despite its many faults, was further along this path than West Germany.  Only after German reunification in 1990 did efforts of the former West Germany to atone for its racist crimes begin to gather serious momentum.

The first part of Neiman’s three-part work, “German Lessons,” outlines Germany’s attempt to come to terms with its crimes of the Nazi period, both before and after unification.  The second part, “Southern Discomfort,” looks at the legacy of racism in the American Deep South, heavily concentrated on the state of Mississippi and on the persistence of the notion of the Lost Cause, a romanticized version of the American Civil War that insists that the war was fought not over slavery but over “states’ rights” — an “abstract phrase that veils the question of what, exactly, Southern states thought they had a right to do” (p.186).  In her third part, “Setting Things Straight,” Neiman considers in broad terms how the American South and the United States as a whole can make strides in coming to terms with a racist past, with the German experience serving as a partial guide.  But this part is more an invitation to debate than a provision of definitive answers.

Neiman, a Jewish American with no direct family connection to the Holocaust, was raised in the American South, in Atlanta, Georgia.  A philosopher by training who studied at Harvard under John Rawls and taught at Yale, she is today the Director of the Einstein Forum, a German think tank located in Potsdam, just outside Berlin.  After nearly a quarter century living and working in Germany, Neiman spent a year at the Winter Institute for Racial Reconciliation in Oxford, Mississippi, a forward-looking institution dedicated explicitly to encouraging people to “honestly engage in their history in order to live more truthfully in the present, where the inequities of the past no longer dictate the possibilities of the future” (p.143).  Utilizing these diverse professional and personal experiences, she mixes analysis and anecdote while introducing her readers to an impressive array of Germans and Americans working on what might be described as the front lines of Vergangenheitsaufarbeitung in their respective countries.

Although her analysis of the United States concentrates on the state of Mississippi, Neiman recognizes that Mississippi is hardly representative of the United States as a whole, and not even of the states of the former Confederacy.  But she contends that awareness of history is arguably more acute in Mississippi than anywhere else in the United States.  “Focusing on the Deep South,” moreover, is “not a matter of ignoring the rest of the country, but of holding a magnifying glass to it” (p.17-18), she writes.  Although just about everyone in the United States now accepts that slavery was wrong, the “national sense of shame” which she finds in today’s Germany is “entirely absent” in the United States; shame is “not the American way” (p.268).

During the nearly three years that Neiman worked on her book, many of the Germans she met with laughed at her proposed title and rejected the idea that Germany had anything to teach Americans about dealing with their racist past.  Most Germans today are defensive about their country’s efforts to work toward Vergangenheitsaufarbeitung, she observes.  They think they took far too long to make the transition from seeing themselves as victims, with some adding that many of their fellow citizens never made the transition at all.  “Good taste,” she writes, “prevents good Germans from anything that could possibly be construed as boasting about repentance” (p.56).  Neiman sees this widespread defensiveness as “itself a sign of how far Germany has come in taking responsibility for its criminal history” (p.17).  But how Germany arrived at this position is not easy to pinpoint.

* * *

Competitive victimhood, Neiman writes, “may be as close to a universal law of human nature as we’re ever going to get.”  Postwar Germany was no less inclined than the defeated American South to participate in this “old and universal sport” (p.63).  Although 80 years separate the defeat of the American South from that of Nazi Germany, Neiman perceives similar litanies: “the loss of their bravest sons, the destruction of their homes, the poverty and hunger that followed – combined with resentment at occupying forces they regarded as generally loutish, who had the gall to insist their suffering was deserved” (p.63).  For decades after World War II, Germans were “obsessed with the suffering they’d endured, not the suffering they’d caused” (p.40).

In the immediate aftermath of World War II, the United States, Britain and France, the occupying powers in what became West Germany, aimed to institute a process of de-Nazification.  Among its many aims, de-Nazification was supposed to purge former Nazis and Nazi sympathizers from positions of influence.  More broadly, as Frederick Taylor argued in Exorcising Hitler: The Occupation and Denazification of Germany (reviewed here in December 2012), de-Nazification was “perhaps the most ambitious scheme to change a nation’s psyche ever mounted in human history.” But de-Nazification was a failed scheme.  West Germans mocked the Allied attempt to impose a change of consciousness.

The Allies, moreover, lacked the resources to make de-Nazification successful, and Cold War rivalries and realities intruded.  The Allies were “far more interested in securing [German] allies against the Soviet Union than in digging up their sordid pasts” (p.99).  The de-Nazification program was turned over to the West German government, which had “no inclination to pursue it” (p.99).  Well into the 1960s, West German commitments to democratic governance were “precarious, and the possibility of a return to a sanitized Nazism could not be ruled out” (p.55).  The implicit message of Konrad Adenauer, West Germany’s first post-war chancellor, seemed to be: behave yourself, don’t call attention to your past, and we won’t look too deeply into that past.

East Germany worked off its Nazi past differently.  Although its official name was the German Democratic Republic (GDR), there was little that was democratic about East Germany.  Its borders were closed, its media heavily censored, and its elections a national joke.  Yet East German leaders had been by and large genuinely anti-fascist and anti-Nazi during the war; the same cannot be said of West German leaders.  Proportionately, East Germany put far more former Nazis on trial, and convicted more of them, than the West did.  The West never invited Jewish émigrés to return; the East did.  Overall, Neiman concludes, East Germany quite simply “did a better job of working off the Nazi past than West Germany” (p.81).

It was not until around 1968 that West Germany began to get serious about Vergangenheitsaufarbeitung, embarking on a path out of denial in conjunction with the student protests that roiled Europe and the United States that year.  Because their parents could not “mourn, acknowledge responsibility, or even speak about the war” (p.70), the 68ers, as the generation born in the 1940s was called, felt compelled to confront their parents over their war experiences and their subsequent silence about those experiences.  A decade later, the American TV series “Holocaust” served as a catalyst for “public discussion of the Holocaust that had been missing for decades” (p.370-71).  Then, on May 8, 1985, 40 years after Germany’s surrender, West German president Richard von Weizsäcker made headlines when he termed that day one of liberation. Up to that point, May 8 in West Germany had been called the Day of Defeat or Day of Unconditional Surrender (yet Weizsäcker even then symbolized the ambivalence of West German Vergangenheitsaufarbeitung: his father had been a high-level Nazi, an assistant to Foreign Minister Joachim von Ribbentrop; Weizsäcker defended his father at the post-war Nuremberg trials and always maintained that his father was trying only to make a bad situation better).

By the time of reunification in 1990, expressions of pro-Nazi sentiment had become “socially unacceptable” and have since become “morally unacceptable” (p.311).  A 1995 exhibit on the Wehrmacht, the Nazi army with 18 million members, demonstrated convincingly that it had systematically committed war crimes,  thereby breaking West Germany’s “final taboo” (p.24).  The exhibit was extended to 33 different cities in Germany and Austria and “ignited media discussions, filled talk shows, and eventually provoked a debate in parliament” (p.24).

Today, the right-wing Alternative for Germany, AfD in German, continues to rise in influence on an anti-immigrant platform many consider neo-Nazi.  Germany gained further unwanted attention earlier this month when Nazi sympathizers were revealed to have infiltrated an elite German security unit.

But today’s Germany has nonetheless reached the point where “open expressions of racism are politically ruinous,” Neiman concludes, which may be the “best outcome we can hope for and it may also be enough. . . Very often, social change begins with lip service” (p.310-11).

* * *

As in Germany, Neiman observes, “the War” throughout the American South is a singular reference.  “Everybody knows that one was decisive, and its repercussions are with us today.”  This knowledge is “more conscious in a Deep South that was occupied, and almost as devastated as Germany, than in the rest of the United States” (p.37).  But the Lost Cause narrative that arose in the American South was an exercise in Civil War historical revisionism that flourished toward the end of the 19th century and into the early 20th, in which the war was rebranded as a “noble fight for Southern freedom,” with the post-war Reconstruction period becoming a “violent effort by ignorant ex-slaves and mercenary Yankees to debase the honor of the South in general, and its white women in particular” (p.181).

Reconciliation under the Lost Cause mythology was “between white members of the opposing armies” to be achieved by “valorizing the defeated, and ignoring the cause for which they fought” (p.182). Reconciliation between white and black folk was not on the agenda.  Slowly and hazily, the Lost Cause narrative “came to capture the hearts of the North. Weary of war, eager for reconciliation, and keen to get on with the business of industrialization that was changing the American economy, Northerners conceded most of the mythmaking to the South. Not many had been enthusiastic abolitionists anyway” (p.186-87).

The Winter Institute, where Neiman conducted much of the research for this book, has sought to counter the Lost Cause narrative through such institutional reforms as creating and implementing school criteria on human rights, fostering inter-racial dialogue in communities known for racial violence, and promoting academic investigation and scholarship on patterns and legacies of racial inequities.   What keeps the Winter Institute going is the notion that “if you can change Mississippi communities, you can probably change anything” (p.142).  The primary lesson Neiman derived from her time at the Winter Institute: “national reconciliation begins at the bottom. Very personal encounters between members of different races, people who represent the victims as well as those who represent the perpetrators, are the foundation of any larger attempt to treat national wounds . . . It is a long and weary process, but it is hard to see an alternative” (p.301).

Neiman discusses at length two notorious murderous acts in mid-20th century Mississippi: the 1955 murder of Emmett Till, a 14-year-old Chicago boy brutally killed during a summer visit to Mississippi; and the murders of Andrew Goodman, Mickey Schwerner, and James Chaney, three civil rights workers, two young white men from New York and a black man from Mississippi, killed near Philadelphia, Mississippi during the following decade while organizing African-Americans to exercise their right to vote.  The two men tried for the Till murder were promptly acquitted.  Protected by the Double Jeopardy Clause of the United States Constitution, they thereafter took money from Look magazine to confess that they had killed the teenager.  No trial at all ensued in the immediate aftermath of the killing of Goodman, Schwerner, and Chaney.

While the world knew the story of the ghastly Till murder, for decades nobody in the local Mississippi Delta community, black or white, wanted to talk about it.  Neiman sees a similarity to the silence that prevailed in Germany in the first decades after the war, where to both non-Jewish and Jewish families, “anything connected to the war was off-limits.  Neither side could bear to talk about it, one side afraid of facing its own guilt, the other afraid of succumbing to pain and rage” (p.217).

In 1989, the Mississippi Secretary of State issued a public apology to the families of the three slain civil rights workers, the first local white man to publicly acknowledge the crime.  Most Mississippians think that is the reason he lost when he ran for governor the following year.  With a strong push from the Winter Institute, a trial in the case finally took place in 2005.  The prime suspect, Edgar Ray Killen, then 80 years old, was convicted, but only of manslaughter.  Killen received three 20-year sentences and died in prison in 2018.  Neiman wonders whether the trial has helped a “healing process” or allowed Mississippi to “rest in the self-satisfaction that the horrors that stigmatized the state all belonged to the past” (p.301).

* * *

In her final section, Neiman runs through the most common arguments against reparations to descendants of victims of slavery, and proffers counter-arguments.  She glosses over what in my mind is the most difficult: how to determine who gets what amount.  She notes that West Germany paid Israel what amounted to reparations early in the history of the two states, the “price for acceptance into the Western Community and the price was relatively cheap . . . Reparations were paid in exchange for world recognition and the opportunity to keep silent about the quantity of Nazis, and Nazi thinking, that permeated the Federal Republic” (p.289).  In the United States, she argues, reparations need not take the form of precise compensation to individual African Americans but should be the subject of public debate.

On the current polemic surrounding statues and memorials honoring Confederate war veterans, Neiman reminds her readers that most were erected in the early part of the 20th century with the express purpose of reinforcing and providing legitimacy to the regime of rigid segregation and discrimination.  They should not be seen as “innocuous shrines to history; they were provocative assertions of white supremacy at moments when its defenders felt under threat.  Knowing when they were built is part of knowing why they were built. . . What is at stake is not the past, but the present and the future. When we choose to memorialize a historical moment, we are choosing the values we want to defend, and pass on” (p.263).

* * *

“Forgetting past evils may be initially safer,” Neiman writes, but in the long run, the “dangers of forgetting are greater than the dangers of remembering — provided, of course, that we use the failures of past attempts to learn how to do it better” (p.373).  Although there is no single pathway to Vergangenheitsaufarbeitung, understanding the distance Germany has traveled in coming to terms with the Nazi era’s racist crimes should benefit Americans yearning to find a better pathway in the turbulent aftermath of the George Floyd killing.

Thomas H. Peebles

La Châtaigneraie, France

July 29, 2020



Filed under American Politics, American Society, German History, History, Politics, United States History

Reading Darwin in Abolitionist New England


Randall Fuller, The Book That Changed America:

How Darwin’s Theory of Evolution Ignited a Nation (Viking)

In mid-December 1859, the first copy of Charles Darwin’s On the Origin of Species arrived in the United States from England at a wharf in Boston harbor.  Darwin’s book explained how plants and animals had developed and evolved over vast stretches of time through a process Darwin termed “natural selection,” a process which distinguished On the Origin of Species from the work of other naturalists of Darwin’s generation.  Although Darwin said little in the book about how humans fit into the natural selection process, the work promised to ignite a battle between science and religion.

In The Book That Changed America: How Darwin’s Theory of Evolution Ignited a Nation, Randall Fuller, professor of American literature at the University of Kansas, contends that what made Darwin’s insight so radical was its “reliance upon a natural mechanism to explain the development of species.  An intelligent Creator was not required for natural selection to operate.  Darwin’s vision was of a dynamic, self-generating process of material change.  That process was entirely arbitrary, governed by physical law and chance – and not leading ineluctably . . . toward progress and perfection” (p.24).  Darwin’s work challenged the notion that human beings were a “separate and extraordinary species, differing from every other animal on the planet. Taken to its logical conclusion, it demolished the idea that people had been created in God’s image” (p.24).

On the Origin of Species arrived in the United States at a particularly fraught moment.  In October 1859, abolitionist John Brown had conducted a raid on a federal arsenal in Harper’s Ferry (then part of Virginia, today West Virginia), with the intention of precipitating a rebellion that would eradicate slavery from American soil.  The raid failed spectacularly: Brown was captured, tried for treason and hanged on December 2, 1859.  The raid and its aftermath exacerbated tensions between North and South, further polarizing the already bitterly divided country over the issue of chattel slavery in its southern states.  Notwithstanding the little Darwin had written about how humans fit into the natural selection process, abolitionists seized on hints in the book that all humans were biologically related to buttress their arguments against slavery.  To the abolitionists, Darwin “seemed to refute once and for all the idea that African American slaves were a separate, inferior species” (p.x).

Asa Gray, a respected botanist at Harvard University and a friend of Darwin, received the first copy of On the Origin of Species in the United States.  He passed the copy, which he annotated heavily, to his cousin by marriage Charles Loring Brace (who was also a distant cousin of Harriet Beecher Stowe, author of the anti-slavery runaway best-seller Uncle Tom’s Cabin).  Brace in turn introduced the book to three men: Franklin Benjamin Sanborn, a part-time school master and full-time abolitionist activist; Amos Bronson Alcott, an educator and loquacious philosopher, today best remembered as the father of author Louisa May Alcott; and Henry David Thoreau, one of America’s best-known philosophers and truth-seekers.  Sanborn, Alcott and Thoreau were residents of Concord, Massachusetts, roughly twenty miles north of Boston, the site of a famous Revolutionary War battle but in the mid-19th century both a leading literary center and a hotbed of abolitionist sentiment.

As luck would have it, Brace, Alcott and Thoreau gathered at Sanborn’s Concord home on New Year’s Day 1860.  Only Gray did not attend. The four men almost certainly shared their initial reactions to Darwin’s work.  This get-together constitutes the starting point for Fuller’s engrossing study, centered on how Gray and the four men in Sanborn’s parlor on that New Year’s Day absorbed Darwin’s book.  Darwin himself is at best a background figure in the study.  Several familiar figures make occasional appearances, among them: Frederick Douglass, renowned orator and “easily the most famous black man in America” (p.91); Bronson Alcott’s author-daughter Louisa May; and American philosopher Ralph Waldo Emerson, Thoreau’s mentor and friend.  Emerson, like Louisa May and her father, was a Concord resident, and Fuller’s study takes place mostly there, with occasional forays to nearby Boston and Cambridge.

Fuller’s study is therefore more tightly circumscribed geographically than its title suggests.  He spends little time detailing the reaction to Darwin’s work in other parts of the United States, most conspicuously in the American South, where any work that might seem to support abolitionism and undermine slavery was anathema.   The study is also circumscribed in time; it takes place mostly in 1860, with most of the rest confined to the first half of the 1860s, up to the end of the American Civil War in 1865.  Fuller barely mentions what is sometimes called “Social Darwinism,” a notion that gained traction in the decades after the Civil War that purported to apply Darwin’s theory of natural selection to the competition between individuals in politics and economics, producing an argument for unregulated capitalism.

Rather, Fuller charts out the paths each of his five main characters traversed in absorbing and assimilating into their own worldviews the scientific, religious and political ramifications of Darwin’s work, particularly during the tumultuous year 1860.  All five were fervent abolitionists.  Sanborn was a co-conspirator in John Brown’s raid.  Thoreau gave a series of eloquent, impassioned speeches in support of Brown.  All were convinced that Darwin’s notion of natural selection had provided still another argument against slavery, based on science rather than morality or economics.  But in varying degrees, all five could also be considered adherents of transcendentalism, a mid-19th century philosophical approach that posited a form of human knowledge that goes beyond, or transcends, what can be seen, heard, tasted, touched or felt.

Although transcendentalists were almost by definition highly individualistic, most believed that a special force or intelligence stood behind nature and that prudential design ruled the universe.  Many subscribed to the notion that humans were the products of some sort of “special creation.”  Most saw God everywhere, and considered the human mind “resplendent with powers and insights wholly distinct from the external world” (p.54).  Transcendentalism was both an effort to invoke the divinity within man and, as Fuller puts it, also a “cultural attack on a nation that had become too materialistic, too conformist, too smug about its place in history” (p.66).

Transcendentalism thus hovered in the background in 1860 as all but Sanborn wrestled with the implications of Darwinism (Sanborn spent much of the year fleeing federal authorities seeking his arrest for his role in John Brown’s raid).  Alcott never left transcendentalism, rejecting much of Darwinism.  Gray and Brace initially seemed to embrace Darwinian theories wholeheartedly, but in different ways each pulled back once he grasped the full implications of those theories.  Thoreau was the only one of the five who wholly accepted Darwinism’s most radical implications, using Darwin’s theories to “redirect his life’s work” (p.ix).

Fuller’s study thus combines a deep dive into the New England abolitionist milieu at a time when the United States was fracturing over the issue of slavery with a medium-level dive into the intricacies of Darwin’s theory of natural selection.  But the story Fuller tells is anything but dry and abstract.  With an elegant writing style and an acute sense of detail, Fuller places his five men and their thinking about Darwin in their habitat, the frenetic world of 1860s New England.  In vivid passages, readers can almost feel the chilly January wind whistling through Franklin Sanborn’s parlor that New Year’s Day 1860, or envision the mud accumulating on Henry David Thoreau’s boots as he trudges through the melting snow in the woods on a March afternoon contemplating Darwin.  The result is a lively, easy-to-read narrative that nimbly mixes intellectual and everyday, ground-level history.

* * *

Bronson Alcott, described by Fuller as America’s most radical transcendentalist, never accepted the premises of On the Origin of Species.  Darwin had, in Alcott’s view, “reduced human life to chemistry, to mechanical processes, to vulgar materialism” (p.10).  To Alcott, Darwin seemed “morbidly attached to an amoral struggle of existence, which robbed humans of free will and ignored the promptings of the soul” (p.150). Alcott could not imagine a universe “so perversely cruel as to produce life without meaning.  Nor could he bear to live in a world that was reduced to the most tangible and daily phenomena, to random change and process” (p.188).  Asa Gray, one of America’s most eminent scientists, came to the same realization, but only after thoroughly digesting Darwin and explaining his theories to a wide swath of the American public.

Gray’s initial reaction to Darwin’s work was one of unbounded enthusiasm.  Gray covered nearly every page of the book with his own annotations.  He admired the book because it “reinforced his conviction that inductive reasoning was the proper approach to science” (p.109).  He also admired the work’s “artfully modulated tone, [and] its modest voice, which softened the more audacious ideas rippling through the text” (p.17). Gray was most impressed with Darwin’s “careful judging and clear-eyed balancing of data” (p.110).  To grapple with Darwin’s ideas, Gray maintained, one had to “follow the evidence wherever it led, ignoring prior convictions and certainties or the narrative one wanted that evidence to confirm” (p.110).  Without saying so explicitly, Gray suggested that readers of Darwin’s book had to be “open to the possibility that everything they had taken for granted was in fact incorrect” (p.110).

Gray reviewed On the Origin of Species for the Atlantic Monthly in three parts, appearing in the summer and fall of 1860.  Gray’s articles served as the first encounter with Darwin for many American readers.  The articles elicited a steady stream of letters from respectful readers.  Some responded with “unalloyed enthusiasm” for a new idea which “seemed to unlock the mysteries of nature” (p.134).  Others, however, “reacted with anger toward a theory that proposed to unravel . . . their belief in a divine Being who had placed humans at the summit of creation” (p.134).  But as Gray finished the third Atlantic article, he began to realize that he himself was not entirely at ease with the diminution of humanity’s place in the universe that Darwin’s work implied.

The third Atlantic article, appearing in October 1860, revealed Gray’s increasing difficulty in “aligning Darwin’s theory with his own religious convictions” (p.213).  Gray proposed that natural selection might be “God’s chosen method of creation” (p.214).  This idea seemed to resolve the tension between scientific and religious accounts of origins, making Gray the first to develop a theological case for Darwinian theory.  But the idea that natural selection might be the process by which God had fashioned the world represented what Fuller describes as a “stunning shift for Gray. Before now, he had always insisted that secondary causes were the only items science was qualified to address.  First, or final causes – the beginning of life, the creation of the universe – were the purview of religion: a matter of faith and metaphysics” (p.214).  Darwin responded to Gray’s conjectures by indicating that, as Fuller summarizes the written exchange, the natural world was “simply too murderous and too cruel to have been created by a just and merciful God” (p.211).

In the Atlantic articles, Fuller argues, Gray leapt “beyond his own rules of science, speculating about something that was untestable” (p.214-15).  Gray must have known that his argument “failed to adhere to his own definition of science” (p.216).  But, much like Bronson Alcott, Gray found it “impossible to live in the world Darwin had imagined: a world of chance, a world that did not require a God to operate” (p.216).  Charles Brace, a noted social reformer who founded several institutions for orphans and destitute children, greeted Darwin’s book with an initial enthusiasm that rivaled that of Gray.

Brace claimed to have read On the Origin of Species 13 times.  He was most attracted to the book for its implications for human societies, especially for American society, where nearly half the country accepted and defended human slavery.  Darwin’s book “confirmed Brace’s belief that environment played a crucial role in the moral life of humans” (p.11), and demonstrated that every person in the world, black, white, yellow, was related to everyone else.  The theory of natural selection was thus for Brace the “latest argument against chattel slavery, a scientific claim that could be used in the most important controversy of his time, a clarion call for abolition” (p.39).

Brace produced a tract entitled The Races of the Old World, modeled after Darwin’s On the Origin of Species, which Fuller describes as a “sprawling, ramshackle work” (p.199).  Its central thesis was simple enough: “There is nothing . . . to prove the negro radically different from the other families of man or even mentally inferior to them” (p.199-200).  But much of The Races of the Old World seemed to undercut Brace’s central thesis.  Although the book never defined the term “race,” Brace “apparently believed that though all humans sprang from the same source, some races had degraded over time . . . Human races were not permanent” (p.199-200).  Brace thus struggled to make Darwin’s theory fit his own ideas about race and slavery. “He increasingly bent facts to fit his own speculations” (p.197), as Fuller puts it.

The Races of the Old World revealed Brace’s hesitation in imagining a multi-racial America. He couched in Darwinian terms the difficulty of the races cohabiting, reverting to what Fuller describes as nonsense about blacks not being conditioned to survive in the colder Northern climate.  Brace “firmly believed in the emancipation of slaves, and he was equally convinced that blacks and whites did not differ in their mental capacities” (p.202).  But he nonetheless worried that “race mixing,” or what was then termed race “amalgamation,” might imperil Anglo-Saxon America, the “apex of development. . . God’s favored nation, a place where democracy and Christianity had fused to create the world’s best hope” (p.202).  Brace joined many other leading abolitionists in opposing race “amalgamation.”  His conclusion that “black and brown-skinned people inhabited a lower rung on the ladder of civilization” was shared, Fuller indicates, by “even the most enlightened New England abolitionists” (p.57).

No such misgivings visited Thoreau, who grappled with On the Origin of Species “as thoroughly and as insightfully as any American of the period” (p.11).  As Thoreau first read his copy of the book in late January 1860, a “new universe took form on the rectangular page before him” (p.75).  Prior to his encounter with Darwin, Thoreau’s thought had often “bordered on the nostalgic.  He longed for the transcendentalist’s confidence in a natural world infused with spirit” (p.157).  But Darwin led Thoreau beyond nostalgia.

Thoreau was struck in particular by Darwin’s portrayal of the struggle among species as an engine of creation.  The Origin of Species revealed nature as process, in constant transformation.  Darwin’s book directed Thoreau’s attention “away from fixed concepts and hierarchies toward movement instead” (p.144-45).  The idea of struggle among species “undermined transcendentalist assumptions about the essential goodness of nature, but it also corroborated many of Thoreau’s own observations” (p.137).  Thoreau had “long suspected that people were an intrinsic part of nature – neither separate nor entirely alienated from it” (p.155).  Darwin now enabled Thoreau to see how “people and the environment worked together to fashion the world,” providing a “scientific foundation for Thoreau’s belief that humans and nature were part of the same continuum” (p.155).

Darwin’s natural selection, Thoreau wrote, “implies a greater vital force in nature, because it is more flexible and accommodating, and equivalent to a sort of constant new creation” (p.246).  The phrase “constant new creation” in Fuller’s view represents an “epoch in American thought” because it “no longer relies upon divinity to explain the natural world” (p.246).  Darwin thus propelled Thoreau to a radical vision in which there was “no force or intelligence behind Nature, directing its course in a determined and purposeful manner.  Nature just was” (p.246-47).

How far Thoreau would have taken these ideas is impossible to know. He became sick in December 1860, stricken with influenza, exacerbated by tuberculosis, and died in June 1862, with Americans fighting other Americans on the battlefield over the issue of slavery.

* * *

Fuller compares Darwin’s On the Origin of Species to a Trojan horse.  It entered American culture “using the newly prestigious language of science, only to attack, once inside, the nation’s cherished beliefs. . . With special and desolating force, it combated the idea that God had placed humans at the peak of creation” (p.213).  That the book’s attack did not spare even New England’s best known abolitionists and transcendentalists demonstrates just how unsettling it was.

Thomas H. Peebles

La Châtaigneraie, France

May 18, 2020



Filed under American Society, History, Political Theory, Religion, Science, United States History

The Power of Human Rights


Samantha Power, The Education of an Idealist:

A Memoir 

By almost any measure, Samantha Power should be considered an extraordinary American success story. An immigrant from Ireland who fled the Emerald Isle with her mother and brother at a young age to escape a turbulent family situation, Power earned degrees from Yale University and Harvard Law School, rose to prominence in her mid-20s as a journalist covering civil wars and ethnic cleansing in Bosnia and the Balkans, won a Pulitzer Prize for a book on 20th century genocides, and helped found the Carr Center for Human Rights Policy at Harvard’s Kennedy School of Government, where she served as its executive director — all before age 35.  Then she met an ambitious junior Senator from Illinois, Barack Obama, and her career really took off.

Between 2009 and 2017, Power served in the Obama administration almost continually, first on the National Security Council and subsequently as Ambassador to the United Nations.  In both capacities, she became the administration’s most outspoken and influential voice for prioritizing human rights, arguing regularly for targeted United States and multi-lateral interventions to protect individuals from human rights abuses and mass atrocities, perpetrated in most cases by their own governments.  In what amounts to an autobiography, The Education of an Idealist: A Memoir, Power guides her readers through the major foreign policy crises of the Obama administration.

Her life story, Power tells her readers at the outset, is one of idealism, “where it comes from, how it gets challenged, and why it must endure” (p.xii).  She is quick to emphasize that hers is not a story of how a person with “lofty dreams” about making a difference in the world came to be “educated” by the “brutish forces” (p.xii) she encountered throughout her professional career.  So what then is the nature of the idealist’s “education” that provides the title to her memoir?  The short answer probably lies in how Power learned to make her idealistic message on human rights both heard and effective within the complex bureaucratic structures of the United States government and the United Nations.

But Power almost invariably couples this idealistic message with the view that the promotion and protection of human rights across the globe is in the United States’ own national security interests; and that the United States can often advance those interests most effectively by working multi-laterally, through international organizations and with like-minded states.  The United States, by virtue of its multi-faceted strengths – economic, military and cultural – is in a unique position to influence the actions of other states, from its traditional allies all the way to those that inflict atrocities upon their citizens.

Power acknowledges that the United States has not always used its strength as a positive force for human rights and human betterment – one immediate example is the 2003 Iraq invasion, which she opposed. Nevertheless, the United States retains a reservoir of credibility sufficient to be effective on human rights matters when it chooses to do so.  Although Power is sometimes labeled a foreign policy “hawk,” she recoils from that label.  To Power, the military is among the last of the tools that should be considered to advance America’s interests around the world.

Into this policy-rich discussion, Power weaves much detail about her personal life, beginning with her early years in Ireland, the incompatibilities between her parents that prompted her mother to take her and her brother to the United States when she was nine, and her efforts as a schoolgirl to become American in the full sense of the term. After numerous failed romances, she finally met Mr. Right, her husband, Harvard Law School professor Cass Sunstein (who also served briefly in the Obama administration). The marriage produced a boy and a girl with lovely Irish names, Declan and Rían, both born while Power was in government.  With much emphasis upon her parents, husband, children and family life, the memoir is also a case study of how professional women balance the exacting demands of high-level jobs with the formidable responsibilities attached to being a parent and spouse.  It’s a tough balancing act for any parent, but especially for women, and Power admits that she did not always strike the right balance.

Memoirs by political and public figures are frequently attempts to write one’s biography before someone else does, and Power’s whopping 550-page work seems to fit this rule.  But Power provides much candor – a willingness to admit to mistakes and share vulnerabilities – that is often missing in political memoirs. Refreshingly, she also abstains from serious score settling.  Most striking for me is the nostalgia that pervades the memoir.  Power takes her readers down memory lane, depicting a now bygone time when the United States cared about human rights and believed in bi- and multi-lateral cooperation to accomplish its goals in its dealings with the rest of the world – a time that sure seems long ago.

* * *

Samantha Jane Power was born in 1970 to Irish parents, Vera Delaney, a doctor, and Jim Power, a part-time dentist.  She spent her early years in Dublin, in a tense family environment where, she can see now, her parents’ marriage was coming unraveled.  Her father put in far more time at Hartigan’s, a local pub in the neighborhood where he was known for his musical skills and “holding court,” than he did at his dentist’s office.  Although young Samantha didn’t recognize it at the time, her father had a serious alcohol problem, serious enough to lead her mother to escape by immigrating to the United States with the couple’s two children, Samantha, then age nine, and her brother Stephen, two years younger. They settled in Pittsburgh, where Samantha at a young age set about to become American, as she dropped her Irish accent, tried to learn the intricacies of American sports, and became a fervent Pittsburgh Pirates fan.

But the two children were required under the terms of their parents’ custody agreement to spend time with their father back in Ireland. On her trip back at Christmas 1979, Samantha’s father informed the nine-year-old that he intended to keep her and her brother with him.  When her mother, who was staying nearby, showed up to object and collect her children to return to the United States, a parental confrontation ensued which would traumatize Samantha for decades.  The nine-year-old found herself caught between the conflicting commands of her two parents and, in a split second decision, left with her mother and returned to Pittsburgh. She never again saw her father.

When her father died unexpectedly five years later, at age 47 of alcohol-related complications, Samantha, then in high school, blamed herself for her father’s death and carried a sense of guilt with her well into her adult years. It was not until she was thirty-five, after many therapy sessions, that she came to accept that she had not been responsible for her father’s death.  Then, a few years later, she made the mistake of returning to Hartigan’s, where she encountered the bar lady who had worked there in her father’s time.   Mostly out of curiosity, Power asked her why, given that so many people drank so much at Hartigan’s, her father had been the only one who died. The bar lady’s answer was matter-of-fact: “Because you left” (p.192) — not what Power needed to hear.

Power had by then already acquired a public persona as a human rights advocate through her work as a journalist in the 1990s in Bosnia, where she called attention to the ethnic cleansing that was sweeping the country in the aftermath of the collapse of the former Yugoslavia.  Power ended up writing for a number of major publications, including The Economist, the New Republic and the Washington Post.  She was among the first to report on the fall of Srebrenica in July 1995, the largest single massacre in Europe since World War II, in which around 10,000 Muslim men and boys were taken prisoner and “seemed to have simply vanished” (p.102). Although the United States and its NATO allies had imposed a no-fly zone over Bosnia, Power hoped the Clinton administration would commit to employing ground troops to prevent further atrocities. But she did not yet enjoy the clout to have a real chance at making her case directly with the administration.

Power wrote a chronology of the conflict, Breakdown in the Balkans, which was later put into book form and attracted attention from think tanks, and the diplomatic, policy and media communities.  Attracting even more attention was A Problem from Hell: America and the Age of Genocide, her book exploring American reluctance to take action in the face of 20th century mass atrocities and genocides.  The book appeared in 2002, and won the 2003 Pulitzer Prize for General Non-Fiction.  It also provided Power with her inroad to Senator Barack Obama.

At the recommendation of a politically well-connected friend, in late 2004 Power sent a copy of the book to the recently elected Illinois Senator who had inspired the Democratic National Convention that summer with an electrifying keynote address.  Obama’s office scheduled a dinner for her with the Senator which was supposed to last 45 minutes.  The dinner went on for four hours as the two exchanged ideas about America’s place in the world and how, why and when it should advance human rights as a component of its foreign policy.  Although Obama considered Power to be primarily an academic, he offered her a position on his Senate staff, where she started working late in 2005.

Obama and Power would then be linked professionally more or less continually until the end of the Obama presidency in January 2017.   Once Obama enters the memoir, at about the one-third point, it becomes as much his story as hers. The two did not always see the world and specific world problems in the same way, but it’s clear that Obama had great appreciation both for Power’s intelligence and her intensity. He was a man who enjoyed being challenged intellectually, and plainly valued the human rights perspective that Power brought to their policy discussions even if he wasn’t prepared to push as far as Power advocated.

After Obama threw his hat in the ring for the 2008 Democratic Party nomination, Power became one of his primary foreign policy advisors and, more generally, a political operative. It was not a role that fit Power comfortably and it threatened to be short-lived.  In the heat of the primary campaign, with Obama and Hillary Clinton facing off in a vigorously contested battle for their party’s nomination, Power was quoted in an obscure British publication, the Scotsman, as describing Clinton as a “monster.” The right-wing Drudge Report picked up the quotation, whose accuracy Power does not contest, and suddenly Power found herself on the front page of major newspapers, the subject of a story she did not want.  Obama’s closest advisors were of the view that she would have to resign from the campaign.  But the candidate himself, who loved sports metaphors, told Power only that she would have to spend some time in the “penalty box” (p.187).  Obama’s relatively soft reaction was an indication of the potential he saw in her and his assessment of her prospective value to him if successful in the primaries and the general election.

Power’s time in the penalty box had expired when Obama, having defeated Clinton for his party’s nomination, won a resounding victory in the general election in November 2008.  Obama badly wanted Power on his team in some capacity, and the transition team placed her on the President’s National Security Council as principal deputy for international organizations, especially the United Nations.  But she was also able to carve out a concurrent position for herself as the President’s Senior Director for Human Rights.  In this portion of the memoir, Power describes learning the jargon and often-arcane skills needed to be effective on the council and within the vast foreign policy bureaucracy of the United States government.  Being solely responsible for human rights, Power found that she had some leeway in deciding which issues to concentrate on and bring to the attention of the full Council.  Her mentor Richard Holbrooke counseled her that she could be most effective on subjects for which there was limited United States interest – pick “small fights,” Holbrooke advised.

Power had a hand in a string of “small victories” while on the National Security Council: coaxing the United States to rejoin a number of UN agencies from which the Bush Administration had walked away; convincing President Obama to raise his voice over atrocities perpetrated by governments in Sri Lanka and Sudan against their own citizens; being appointed White House coordinator for Iraqi refugees; helping create an inter-agency board to coordinate the United States government’s response to war crimes and atrocities; and encouraging increased emphasis upon lesbian, gay, bisexual and transgender (LGBT) issues overseas.  In pursuit of the latter, Obama delivered an address at the UN General Assembly on LGBT rights, and thereafter issued a Presidential Memorandum directing all US agencies to consider LGBT issues explicitly in crafting overseas assistance (disclosure: while with the Department of Justice, I served on the department’s portion of the inter-agency Atrocity Prevention Board, and represented the department in inter-agency coordination on the President’s LGBT memorandum; I never met Power in either capacity).

But the Arab Spring that erupted in late 2010 and early 2011 presented anything but small issues and resulted in few victories for the Obama administration.  A “cascade of revolts that would reorder huge swaths of the Arab world,” the Arab Spring ended up “impacting the course of Obama’s presidency more than any other geopolitical development during his eight years in office” (p.288), Power writes, and the same could be said for Power’s time in government.  Power was among those at the National Security Council who pushed successfully for United States military intervention in Libya to protect Libyan citizens from the predations of their leader, Muammar Qaddafi.

The intervention, backed by a United Nations Security Council resolution and led jointly by the United States, France and Britain, saved civilian lives and contributed to Qaddafi’s ouster and death.  But President Obama was determined to avoid a longer-term and more open-ended United States commitment, and the mission stopped short of the follow-up needed to bring stability to the country.  With civil war in various guises continuing to this day, Power suggests that the outcome might have been different had the United States continued its engagement in the aftermath of Qaddafi’s death.

Shortly after Power became US Ambassador to the United Nations, the volatile issue of an American military commitment arose again, this time in Syria in August 2013, when irrefutable proof came to light that Syrian leader Bashar al-Assad was using chemical weapons in his effort to suppress uprisings within the country.  The revelations came 13 months after Obama had asserted that use of such weapons would constitute a “red line” that would move him to intervene militarily in Syria.  Power favored targeted US air strikes within Syria.

Obama came excruciatingly close to approving such strikes.  He not only concluded that the “costs of not responding forcefully were greater than the risks of taking military action” (p.369), but was prepared to act without UN Security Council authorization, given the certainty of a Russian veto of any Security Council resolution for concerted action.  With elevated stakes for “upholding the international norm against the use of chemical weapons,” Power writes, Obama was “prepared to operate with what White House lawyers called a ‘traditionally recognized legal basis under international law’” (p.369).

But almost overnight, Obama decided that he needed prior Congressional authorization for a military strike in Syria, a decision taken seemingly with little effort to ascertain whether there was sufficient support in Congress for such a strike.  With neither the Congress nor the American public supporting military action within Syria to save civilian lives, Obama backed down.  On no other issue did Power see Obama as torn as he was on Syria, “convinced that even limited military action would mire the United States in another open-ended conflict, yet wracked by the human toll of the slaughter.  I don’t believe he ever stopped interrogating his choices” (p.508).

Looking back at that decision with the passage of more than five years, Power’s disappointment remains palpable.  The consequences of inaction in Syria, she maintains, went:

beyond unfathomable levels of death, destruction, and displacement. The spillover of the conflict into neighboring countries through massive refugee flows and the spread of ISIS’s ideology has created dangers for people in many parts of the world. . . [T]hose of us involved in helping devise Syria policy will forever carry regret over our inability to do more to stem the crisis.  And we know the consequences of the policies we did choose. For generations to come, the Syrian people and the wide world will be living with the horrific aftermath of the most diabolical atrocities carried out since the Rwanda genocide (p.513-14).

But if incomplete action in Libya and inaction in Syria constitute major disappointments for Power, she considers exemplary the response of both the United States and the United Nations to the July 2014 outbreak of the Ebola virus that occurred in three West African countries, Guinea, Liberia and Sierra Leone.  United States experts initially foresaw more than one million infections of the deadly and contagious disease by the end of 2015.  The United States devised its own plan to send supplies, doctors and nurses to the region to facilitate the training of local health workers to care for Ebola patients, along with 3,000 military personnel to assist with on-the-ground logistics.  Power was able to talk President Obama out of a travel ban to the United States from the three impacted countries, a measure favored not only by Donald Trump, then contemplating an improbable run for the presidency, but also by many members of the President’s own party.

At the United Nations, Power was charged with marshaling global assistance.  She convinced 134 fellow Ambassadors to co-sponsor a Security Council resolution declaring the Ebola outbreak a public health threat to international peace and security, the largest number of co-sponsors for any Security Council resolution in UN history and the first ever directed to a public health crisis.  Thereafter, UN Member States committed $4 billion in supplies, facilities and medical treatments.  The surge of international resources that followed meant that the three West African countries “got what they needed to conquer Ebola” (p.455).  At different times in 2015, each of the countries was declared Ebola-free.

The most deadly and dangerous Ebola outbreak in history was contained, Power observes, above all because of the “heroic efforts of the people and governments of Guinea, Liberia and Sierra Leone” (p.456). But America’s involvement was also crucial.  President Obama provided what she describes as an “awesome demonstration of US leadership and capability – and a vivid example of how a country advances its values and interests at once” (p.438).  But the multi-national, collective success further illustrated “why the world needed the United Nations, because no one country – even one as powerful as the United States – could have slayed the epidemic on its own” (p.457).

Although Russia supported the UN Ebola intervention, Power more often found herself in an adversarial posture with Russia on both geo-political and UN administrative issues.  Yet, she used creative diplomatic skills to develop a more nuanced relationship with her Russian counterpart, Vitaly Churkin.  Churkin, a talented negotiator and master of the art of strategically storming out of meetings, valued US-Russia cooperation and often “pushed for compromises that Moscow was disinclined to make” (p.405).  Over time, Power writes, she and Churkin “developed something resembling genuine friendship” (p.406). But “I also spent much of my time at the UN in pitched, public battle with him” (p.408).

The most heated of these battles ensued after Russia invaded Ukraine in February 2014, a flagrant violation of international law. Later that year, troops associated with Russia shot down a Malaysian passenger jet, killing all passengers aboard.  In the UN debates on Ukraine, Power found her Russian counterpart “defending the indefensible, repeating lines sent by Moscow that he was too intelligent to believe and speaking in binary terms that belied his nuanced grasp of what was actually happening” (p.426). Yet, Power and Churkin continued to meet privately to seek solutions to the Ukraine crisis, none of which bore fruit.

While at the UN, Power went out of her way to visit the offices of the ambassadors of the smaller countries represented in the General Assembly, many of whom had never received a United States Ambassador.  During her UN tenure, she managed to meet personally with the ambassadors from every country except North Korea.  Power also started a group that gathered the UN’s 37 female Ambassadors together one day a week for coffee and discussion of common issues.  Some involved substantive matters that the UN had to deal with, but just as often the group focused on workplace matters that affected the women ambassadors as women, matters that their male colleagues did not have to deal with.

* * *

Donald Trump’s surprise victory in November 2016 left Power stunned.  His nativist campaign to “Make America Great Again” seemed to her like a “repudiation of many of the central tenets of my life” (p.534).  As an immigrant, a category Trump seemed to relish denigrating, she “felt fortunate to have experienced many countries and cultures. I saw the fate of the American people as intertwined with that of individuals elsewhere on the planet.  And I knew that if the United States retreated from the world, global crises would fester, harming US interests” (p.534-35).  As Obama passed the baton to Trump in January 2017, Power left government.

Not long after, her husband suffered a near-fatal automobile accident, from which he recovered. Today, the pair team-teach courses at Harvard, while Power seems to have found the time for her family that proved so elusive when she was in government.  She is coaching her son’s baseball team and helping her daughter survey rocks and leaves in their backyard.  No one would begrudge Power’s quality time with her family. But her memoir will likely leave many readers wistful, daring to hope that there may someday be room again for her and her energetic idealism in the formulation of United States foreign policy.

Thomas H. Peebles

La Châtaigneraie, France

April 26, 2020


Filed under American Politics, American Society, Politics, United States History

School Girls on the Front Lines of Desegregation


Rachel Devlin, A Girl Stands in the Door:

The Generation of Young Women Who Desegregated America’s Schools

(Basic Books)

When World War II ended, public schools in the United States were still segregated by race throughout much of the country.  Segregated schools were mandated by state legislatures in all the states of the former Confederacy (“the Deep South”), along with Washington, D.C., Delaware and Arizona, while a handful of American states barred racial segregation in their public schools.  In the remainder, the decision whether to segregate was left to local jurisdictions.  Racial segregation of public schools found its constitutional sanction in Plessy v. Ferguson, the United States Supreme Court’s 1896 decision which held that equal protection of the law under the federal constitution did not prohibit states from maintaining public facilities that were “separate but equal.”

But “separate but equal” was a cruel joke, particularly as applied to public schools: in almost every jurisdiction which maintained segregated schools, those set aside for African-Americans were by every objective standard unequal and inferior to counterpart white schools.  In 1954, the Supreme Court, in one of its most momentous decisions, Brown v. Board of Education of Topeka, Kansas, invalidated the Plessy “separate but equal” standard as applied to public schools, holding that in the school context separate was inherently unequal.  The decision preceded by a year and a half the Montgomery, Alabama, bus boycott that made both Rosa Parks and Martin Luther King, Jr., household names.  Brown was arguably the opening salvo in what we now term the modern Civil Rights Movement.

That pathway has been the subject of numerous popular and scholarly works, the best known of which is Richard Kluger’s magisterial 1975 work Simple Justice.  In Kluger’s account and most others, the National Association for the Advancement of Colored People (NAACP) and its Legal Defense Fund (LDF), which instituted Brown and several of its predecessor cases, are front and center, with future Supreme Court justice Thurgood Marshall, the LDF’s lead litigator, the undisputed lead character.  Yet, Rachel Devlin, an associate professor of history at Rutgers University, maintains that earlier studies of the school desegregation movement, including that of Kluger, overlook a critical point: the students who desegregated educational institutions – the “firsts,” to use Devlin’s phrase — were mostly girls and young women.

Devlin’s research revealed that only one of the early, post-World War II primary and secondary school desegregation cases that paved the way to the Brown decision was filed on behalf of a boy.  Looking at those who “attempted to register at white schools, testified in court, met with local white administrators and school boards, and talked with reporters from both the black and white press,” Devlin saw almost exclusively schoolgirls.  This disparity “held true in the Deep South, upper South, and Midwest” (p.x). After the Brown decision, the same pattern prevailed: “girls and young women vastly outnumbered boys as the first to attend formerly all-white schools” (p.x).

Unlike Kluger, Devlin does not focus on lawyers and lawsuits but rather on the “largely young, feminine work that brought school desegregation into the courts” (p.xi).  She begins with court challenges to state enforced segregation at the university level, some of which began before World War II.  She then proceeds to a host of post-World War II communities that challenged racial segregation in primary and secondary schools in the late 1940s and early 1950s.  The Brown decision itself, a ruling on segregated schools in Topeka, Kansas, merits only a few pages, after which she portrays the first African-American students to enter previously all-white schools during the second half of the 1950s and into the 1960s.  The pre-Brown challenges to segregated public education that Devlin highlights took place in Washington, D.C., Kansas, Delaware, Texas and Virginia. In her post-Brown analysis, she turns to the Deep South, to communities in Louisiana, Georgia and South Carolina.

Devlin’s intensely factual and personality-driven narrative at times falls victim to a forest-and-trees problem: she focuses on a multitude of individuals — the trees — to the point that the reader  can easily lose sight of the forest — how the featured individuals fit into the overall school desegregation movement.  Yet, there are a multitude of lovely trees to behold in Devlin’s forest – heroic and endearing schoolgirls and the adults who supported them, both men and women, all willing to confront entrenched racial segregation in America’s public schools.

* * *

School desegregation, Devlin writes, differed from other civil rights battles, such as desegregation of lunch counters, public transportation, and parks, in that interacting with white people was not “fleeting or ‘fortuitous,’ but central to the project itself.  School desegregation required sustained interactions with white school officials and students. This fact called for a different approach than other forms of civil rights activism” (p.xxiv).   But Devlin also emphasizes that this different approach gave rise to controversy among affected African-Americans.

In almost every community she studied, there was a dissident African-American faction that opposed desegregation of all-white schools, favoring direct pressure and court cases designed to force school authorities to make good on the “equal” portion of “separate but equal.”  Parents who favored this less frontal approach, while “willing to protest unequal schools, simply wanted a better education for their children while they were still young enough to receive it, not a long, hard campaign against a long-standing Supreme Court precedent” (p.167).  Devlin demonstrates that this quest for equalization, however understandable, was at best quixotic. Time and time again, she shows, the white power structure in the communities she studies had no serious intention of equalizing black and white schools.

Why girls and young women predominated in school desegregation efforts is as much a part of Devlin’s story as the particulars of those efforts at the institutions and in the communities she studies.  After WWII, she notes, there was a “strong, though unstated, cultural assumption that the war to end school segregation was a girls’ war, a battle for which young women and girls were specially suited” (p.xvi).  With the example of boys and young men who had gone off to fight in World War II fresh in everyone’s minds, Devlin speculates, girls and young women may have felt an “ethical compulsion to act at a young age” (p.xvi).

Devlin was able to interview several of the female firsts for her book as they looked back on their experience in desegregating schools several decades earlier.  These women, she indicates, had been inspired as school girls “not only by a sense of obligation and individual calling but also by the opportunity to do something important and highly visible in a world and at a time when young women did not often earn much public acclaim” (p.225). The boys and young men she studied, by contrast, manifested a “desire to distance themselves from an overt, individual commitment to desegregating schools” (p.223).  Leaving was more of an option for high school age boys who felt alienated in newly desegregated schools.  They had “more mobility – and autonomy – than young women, and it allowed them to walk away from the school desegregation process when they felt it was not working for them” (p.196).   Leaving for girls “did not feel like a choice, both because they understood their parents’ expectations of them and because they had fewer alternatives” (p.196).

* * *

The pathway to Brown in Devlin’s account starts at the university level with Lucille Bluford and Ida Mae Sipuel, two lesser-known women who were denied admission because of their race to, respectively, the University of Missouri School of Journalism and the University of Oklahoma Law School.  Both saw their court cases overshadowed by those of men, Lloyd Gaines and Herman Sweatt, pursuing university level desegregation in court at the same time.  But while the two men’s cases established major Supreme Court precedents, both proved to be disappointing plaintiffs and spokesmen for the desegregation cause, in sharp contrast to Bluford and Sipuel.

Gaines was the beneficiary of one of the Supreme Court’s first major decisions involving higher education, Gaines v. Canada, where the Court ruled in 1938 that the State of Missouri was required either to admit Gaines to the University of Missouri Law School or create a separate facility for him.  Missouri chose the latter option, which Gaines refused.  But he thereafter went missing.  He was last seen taking a train to Chicago and was never heard from again.  Bluford, then a seasoned journalist working for the African-American newspaper the Kansas City Call, not only covered the Gaines litigation but also set out to gain admission herself to the University of Missouri’s prestigious School of Journalism.

Both “hardheaded and gregarious” (p.32), Bluford doggedly pursued admission to the university’s journalism school between 1939 and 1942.  In her court case, her lawyer, the NAACP’s Charles Houston, provided the book’s title in his closing argument when he told the court: “A girl stands at the door and a generation waits outside” (p.27).  When Bluford won a victory in court in 1942, Missouri chose to close its journalism school, citing low wartime enrollment, rather than admit Bluford.  But with her uncanny ability to find “significance in small acts of decency and mutual acknowledgement in everyday encounters” (p.11), Bluford turned her energies to reporting on school desegregation cases throughout the country, including both Sipuel’s quest to enter the University of Oklahoma Law School and the Kansas desegregation cases that led to Brown.

Sipuel agreed to challenge the University of Oklahoma Law School’s refusal to admit African-Americans only after her brother Lemuel turned down the NAACP’s request to serve as plaintiff in the case.  In 1946, she refused Oklahoma’s offer to create a separate “Negro law school,” and two years later won a major Supreme Court case when the Court ruled that Oklahoma was obligated to provide her with legal education equal to that of whites.  Sipuel became the near perfect first at the law school, Devlin writes, personifying the uncommon array of skills required in that sensitive position:  “personal ambition combined with an ability to withstand public humiliation, charisma in front of the camera and self-sacrificing patience, the appearance of openness with the black and white press corps alongside an implacable determination” (p.67).

The “girl who started the fight,” as one black newspaper described Sipuel, became “something of a regional folk hero” (p.52) as a role model for future desegregation plaintiffs.  The “revelation that school desegregation was in their grasp came not from the persuasive power of NAACP officials and lawyers,” Devlin writes, but from the “‘young girl’ who would not be turned down” (p.37).  Sipuel went on to become the law school’s first African American graduate and thereafter the first African-American to pass the Oklahoma bar.

Sipuel’s engaging and exuberant public persona contrasted with that of Herman Sweatt, who sought to enter the University of Texas’s flagship law school in Austin.  In a 1950 case bearing his name, Sweatt v. Painter, the Supreme Court rejected Texas’ contention that it could satisfy the requirements of the constitution’s equal protection clause by consigning Sweatt to a “Negro law school” it had established in Houston.  The Court’s sweeping decision outlawed segregation in its entirety in graduate school education.  But although Sweatt did not go missing in action like Lloyd Gaines, he never completed his course of study at the University of Texas Law School and proved to be ill suited to the high-visibility, high-pressure role of a desegregation plaintiff.  He exuded neither Sipuel’s enthusiastic commitment to desegregated higher education, nor her grace under fire.

As the Supreme Court was rewriting the rules of university level education, dozens of cases challenging primary and secondary school segregation were percolating in jurisdictions across America, with Washington, D.C., and Merriam, Kansas, near Kansas City, providing the book’s most memorable characters.  Rigidly segregated Washington, the nation’s capital, had several lawsuits going simultaneously, each of which featured a strong father standing behind a courageous daughter.

First out of the gate was 14-year-old Marguerite Carr.  Amidst much fanfare, in 1947 Marguerite’s father took her to enroll at a newly built white middle school two blocks from her home, where she faced off with the school principal.  When the principal told her, “you don’t want to come here,” Carr smiled, a “sign of social reciprocity, trustworthiness, a willingness to engage,” yet at the same time told the principal respectfully but firmly, “I do want to come to this school” (p.ix).  Carr’s poised response was pitch perfect, Devlin argues, meeting the “contradictory requirements inherent in such confrontations” (p.ix).

Marguerite’s court case coincided with that of Karla Galaza, a Mexican-American who had been attending  a black vocational school with a strong program in dress design until school authorities discovered that she was not black and barred her from the school.  Her stepfather, a Mexican-American activist, filed suit on his daughter’s behalf.  Simultaneously, Gardner Bishop surged into a leadership position during an African-American student strike challenging segregated education in Washington.  Bishop, by day a barber, was an activist who thrust his somewhat reluctant daughter Judine into the strike and subsequent litigation.  Bishop described himself as an outsider in Washington’s desegregation battle, representing the city’s African-American working class rather than its black middle class.  None of these cases culminated in a major court decision.

The NAACP later chose Spotswood Bolling as the lead plaintiff over a handful of girls in the lawsuit that accompanied Brown to the Supreme Court.  The young Bolling was another elusive male plaintiff, dodging all reporters and photographers.  His discomfort with the press “sets in high relief the performances of girl plaintiffs with reporters in the late 1940s” (p.173), Devlin argues.  Girls and young women “felt it was their special responsibility to find ways to address such inquiries. Bolling evidently did not” (p.174).   But the case bearing his name, Bolling v. Sharpe, decided at the same time as Brown, held that segregation in Washington’s public schools was unconstitutional even though, as a federal district rather than a state, Washington was not technically bound by the constitution’s equal protection clause.

In South Park, Kansas, an unincorporated section of Merriam, located outside Kansas City, Esther Brown, arguably the book’s most unforgettable character, led a student strike over segregated schools.  Brown, a 23-year-old Jewish woman, committed radical, and communist sympathizer, cast herself as merely a “housewife with a conscience” — a “deliberately humble, naïve, and conservative image” (p.108) that she invoked constantly in her dealings with the public.  Lucille Bluford covered the strike for the Kansas City Call.  Bluford and the “White Mrs. Brown,” as she was called, subsequently became friends (Esther Brown was not related to Oliver Brown, the named plaintiff in the Brown case).

During the South Park student strike, Esther Brown went out on a limb to promise that she would find a way to pay the teachers herself.  She organized a Billie Holiday concert, but most of her fundraising targeted people of modest means – farmers, laborers, and domestics.  She eventually persuaded Thurgood Marshall that the NAACP should initiate a court case, despite Marshall’s initial reservations — he was suspicious of what he described as a “one woman show” (p.125).  Although the lawsuit was filed on behalf of equal numbers of boys and girls, Patricia Black, then eight years old, was chosen to testify in court — “setting another pattern of female participation for the cases to come” (p.111).  Black, who wore a white bow in her hair when she testified, reflected years later that she had been “taught how to act,” which meant “having manners . . . sitting up straight . . . making eye contact, being erect, and [being] nice” (p.139).

The South Park lawsuit led to the NAACP’s first major desegregation victory below the university level.  Black grade school students successfully entered the white school in the fall of 1949. The South Park case also inspired the challenge to segregated schooling in Topeka that culminated in the Brown decision.  At the trial in Brown, it was a 9-year-old girl, Kathy Cape, not the named plaintiff, Oliver Brown, who accepted the personal risk and outsized responsibility of testifying.

With the Supreme Court’s ruling in Brown meriting barely more than a page, Devlin turns in the last third of the book to the schoolgirls who entered previously all white schools in the aftermath of the ruling.  Here, more than in the earlier portions of the book, she describes in stark terms the white opposition to desegregation which, although widespread, was especially ferocious in the Deep South, where the “vast majority of school boards angrily fought school desegregation with every resource available to them” (p.192).  Devlin notes that between 1955 and 1958, southern legislatures passed nearly five hundred laws to impede implementation of Brown.

In New Orleans, three girls, Tessie Prevost, Leona Tate and Ruby Bridges, were chosen to be firsts as eight-year-olds at Semmes Elementary School.  Years later, Tessie described to Devlin what she, Leona and Ruby had endured at Semmes.  Administrators, teachers, and fellow pupils “did everything in their power to break us” (p.213-14), Prevost recounted.  Even teachers incited violence against the girls:

The teachers were no better than the kids. They encouraged them to fight us, to do whatever it took.  Spit on us. We couldn’t even eat in the cafeteria; they’d spit on our food – we could hardly use the restrooms  . . . They’d punch you, trip you, kick you . . . They’d push you down the steps . . . I got hit by a bat . . . in the face . . . It was every day. And the teachers encouraged it . . . Every day.  Every day (p.214).

The New Orleans girls’ experience was typical of the young firsts from the other Southern communities Devlin studied, including Baton Rouge, Louisiana, Albany, Georgia and Charleston, South Carolina.  Nearly all experienced relentless abuse, “not simply violence and aggression but a systemic, all encompassing, organized form of endless oppression” (p.214). Throughout the South, black schoolgirls demonstrated an extraordinary ability to “withstand warfare within the school when others could not,” which Devlin characterizes as a “barometer of their determination, courage, ability, and strength” (p.218).

* * *

Devlin acknowledges a growing contemporary disillusionment with the Brown decision and school integration generally among legal scholars, historians and ordinary African-Americans.  But the school desegregation firsts who met with Devlin for this book uniformly believe that their actions more than a half-century earlier had “transformed the arc of American history for the better” (p.268).   Even if Brown no longer occupies quite the exalted place it once enjoyed in the iconography of the modern Civil Rights Movement, the schoolgirls and supporting adults whom Devlin portrays in this deeply researched account deserve our full admiration and gratitude.


Thomas H. Peebles

La Châtaigneraie, France

April 8, 2020



Filed under American Society, United States History

Lenny as Paterfamilias


Jamie Bernstein, Famous Father Girl:

A Memoir of Growing Up Bernstein (Harper)


In Famous Father Girl: A Memoir of Growing Up Bernstein, Jamie Bernstein, daughter of legendary conductor, composer and overall musical genius Leonard Bernstein (1918-1990), sheds light upon how she grew up in the shadow of the legend.  In Jamie’s early years, her family looked outwardly conventional, or at least conventional for the upper crust Manhattan milieu in which she and her two siblings were raised.  Jamie, the oldest child, was born in 1952; her brother Alexander followed two years later, and their younger sister Nina was born in 1962.

Their mother Felicia Montealegre – “Mummy” throughout the memoir — was a native of Chile and a Roman Catholic from a semi-aristocratic background, a contrast to her American-born Jewish husband from a first-generation immigrant family.  Felicia was an accomplished pianist and aspiring actress, an elegant and insightful woman who was highly engaged in the lives of her children and served as the family “policeman” and “stabilizer” (p.100).   But Felicia died of cancer in 1978 at age 56.

In 1951, Felicia married Jamie’s father, most frequently referred to here as “Daddy,” but also as “Lenny,” “LB,” and “the Maestro.”  Felicia’s husband was already a world-class conductor and composer when they married, and became ever more the celebrity as the couple’s three children grew up.  Jamie’s portrait of Bernstein the father and husband conforms to what most readers passingly familiar with Bernstein would anticipate: a larger than life figure who quickly filled up any room he entered; ebullient, exuberant, and eccentric; a chain smoker, a prodigious talker as well as music maker; and a man who loved jokes,  spent much time under a sunlamp, and had a proclivity for kissing on the lips just about everyone he met, male or female.  The insights into Bernstein’s personality and how he filled the role of father and husband are one of two factors that make this memoir . . . well, memorable.

The other factor is Bernstein’s sexuality. Despite the appearances of conventional marriage and family life, the bi-sexual Maestro leaned heavily toward the gay side of the equation.  Jamie’s elaboration upon how she became aware of her father’s preference for other men, and the effect of her father’s homosexuality on her mother and the family, constitute the memoir’s backbone.  Although she provides her perspective on her father’s musical achievements, she spends more time on Bernstein as paterfamilias than Bernstein as music maker.  Jamie also reveals how she struggled to find her own pathway through life as an adolescent and young adult, feeling stalked by her family’s name and her father’s fame.

* * *

Jamie became aware of her father’s sexual preferences as a teenager.  She had landed a summer job at the Tanglewood Summer Music Festival in Western Massachusetts, where her father conducted.  People at Tanglewood talked freely about her father and the men he was involved with:

They talked about it quite casually in front of me, so I pretended I knew all about it – but I didn’t. I mentally reviewed past experiences; had I sensed, or observed, anything to indicate that my father was homosexual?  He was extravagantly affectionate with everyone: young and old, male and female. How could I possibly tell what any behavior meant? And anyway, weren’t homosexuals supposed to be girly? . . . Yet there was nothing I could detect that was particularly effeminate about my father. How exactly did he fit into this category?  I was bewildered and upset.  I couldn’t understand any of it – but in any case, my own existence seemed living proof that the story was not a simple one (p.123).

Thereafter, Jamie wrote her father a letter about what she had learned at Tanglewood.  When she joined her parents at their weekend house in Connecticut, her father took her outside.   He denied what he described as “rumors” that were propagated, he said, by persons who envied his professional success and hoped to jeopardize his career.  Later, Jamie wondered whether her mother had forced her father to deny everything.  After her confrontation with her father, she began to discuss her father’s sexual complexities with her siblings but never again raised the subject with either parent.

Jamie learned subsequently that prior to her parents’ marriage, Felicia had written to her future husband: “You are a homosexual and may never change . . . I am willing to accept you as you are, without being a martyr and sacrificing myself on the L.B. altar” (p.124).  Her clear-eyed mother had entered into her marriage knowing full well, Jamie concluded, that she was “marrying a tsunami – and a gay one at that” (p.172).  Her parents may have reached an agreement, perhaps tacit, that her father would confine his philandering to the time he was on the road.  At home, he was to be very conventional.

But that agreement came to an end in 1976, when Leonard took a separate apartment in New York to spend time with a young man, Tommy Cothren, with whom he had fallen “madly in love” (p.188).  Her father, Jamie writes, was “starting a new life – so he was cheerful, acting exuberantly gay and calling everyone ‘darling’” (p.188).  In the rift between her parents, her brother Alexander seemed to be taking Felicia’s side while Jamie worried that she was not being sufficiently supportive of her mother.  She was “trying so hard to be equitable.  I wanted my father to find his true self and be happy with who he was . . . but I couldn’t help being ambivalent over how gracelessly he was going about it, and how much pain he was inflicting on our mother . . . Sometimes I wondered if I should have been taking sides” (p.187).

These wrenching family issues became moot two years later, when Felicia died of breast cancer. Jamie notes that her father was quite attentive to her mother as her condition worsened.  The loss of Felicia “ripped through our family’s world with a seismic shudder.   She was so adored, so deeply beautiful . . . and gone so unbearably too soon, at fifty-six” (p.218).  In the absence of Mummy, Jamie writes, her father became “as untamed as a sail flapping in a squall. The family’s preexisting behavioral boundaries were gone; now anything could happen” (p.233).  Her father’s “intense physicality and flamboyance had always been there, but now, in the absence of Felicia’s calming influence, it became a beast unleashed” (p.235).  After Felicia’s death, Leonard spent an increasing amount of time in Key West, in the Florida Keys, where the sunshine and gay intellectual culture attracted him.

Bernstein himself died in 1990, at the relatively young age of 72, from a form of lung cancer associated with asbestos exposure rather than his life-long cigarette habit (a habit which his wife shared and one which Jamie detested from an early age).  The Maestro’s final years were ones where sexual liberation combined with physical and mental decline.  He suffered from depression and “hated getting older, hated his diminishing physicality.  But the other part of the problem – and the two were inextricably intertwined – was that he was continuing to put prodigious quantities of uppers, downers, and alcohol into a body that was growing ever less efficient at metabolizing all those substances” (p.258).  His “decades of living at maximum volume appeared to be catching up with him at last” (p.316), Jamie writes.

At a concert at Tanglewood just months prior to his death, Bernstein had trouble conducting Arias and Barcarolles, a piece he had written.  “[H]is brain was so oxygen-deprived by that point that he couldn’t track the complexities of his own music” (p.319).  When he came out afterwards for his bow, he was “tiny, ashen, and nearly lost inside the white suit that now hung so loosely on him, it looked as if it had been tailored for some other species” (p.319).

One shining exception to Bernstein’s downward spiral in his final years occurred at concerts in Berlin during the 1989 Christmas holiday season, the month following the fall of the Berlin wall.  Bernstein conducted a “mighty ensemble comprising players volunteering from various orchestras around the world who, along with four soloists and a local girls’ chorus, gave a pair of performances of Beethoven’s Ninth Symphony: one in East Berlin and one in West Berlin.”  And to make the performances “extra-historic,” Bernstein changed Schiller’s text in the final “Ode to Joy” movement: “now it was ‘Ode to Freedom.’ ‘Freiheit!’ The word rang out again and again, wreathed in Beethoven’s harmonies, and the world watched it on television on Christmas Day” (p.313).  The Berlin concerts were in Jamie’s view her father’s “peak performance,” the “pinnacle” of his lifelong advocacy for world peace and brotherhood, “never more eloquently expressed, and never to so many, than through Beethoven’s notes in that historical Christmas performance” (p.313).

But Bernstein’s progressive political orientation did not always play so well at home.  In 1970, Felicia hosted a fundraiser at their Park Avenue apartment which Leonard attended, designed to assist the families of 21 members of the Black Panther party who were in jail with inflated bail amounts, “awaiting trial for what turned out to be trumped-up accusations involving absurd bomb plots” (p.109).  The Black Panthers advocated black empowerment “by any means necessary” and were anti-Zionist, making them scary even in liberal New York.  No journalists were invited to the fundraiser, but somehow two snuck in: the New York Times society writer and the journalist Tom Wolfe (who died shortly after the memoir’s publication).

An article in the Times the next day heaped scorn on the event.  “Everything about this article was loathsome,” Jamie writes, “and my parents were both aghast. But that was just the beginning” (p.112).  The Times followed a few days later with an editorial chastising the couple for mocking the memory of Martin Luther King.  The militant Jewish Defense League organized pickets in front of the Bernsteins’ building and the couple became the “butt of ridicule” (p.113) in New York and nationally.  Then, weeks later, Wolfe came out with an article in New York magazine entitled “That Party at Lenny’s,” followed by Radical Chic, a book centered on the event.  “My mother’s very serious fundraiser had become her celebrity husband’s ‘party’” (p.116), Jamie writes.

Wolfe’s works had the effect of setting in stone the misinterpretation and mockery of the Panther event.  Jamie contends bitterly that Wolfe never comprehended the depth of the damage he wreaked on her family.  Unlike her father, Felicia had no work to back her up in the aftermath of the Panther debacle and grew increasingly despondent.  Four years later, she was diagnosed with cancer and underwent a mastectomy.  Four years after that, she was dead of the disease.  Even when Jamie wrote her memoir, a time when Wolfe himself was near death, “my rage and disgust can rise up in me like an old fever – and in those nearly deranged moments, it doesn’t seem like such a stretch to lay Mummy’s precipitous decline, and even demise, at the feet of Mr. Wolfe” (p.117).

Nor did Wolfe comprehend, Jamie further argues, the degree to which his “snide little piece of neo-journalism rendered him a veritable stooge for the FBI.”  Bureau Director J. Edgar Hoover “may well have shed a tear of gratitude that this callow journalist had done so much of the bureau’s work by discrediting left-wing New York Jewish liberals while simultaneously pitting them against the black activist movement – thereby disempowering both groups in a single deft stroke” (p.116).  With the Panther incident, the FBI became “obsessed with Leonard Bernstein all over again. Hoover was deeply paranoid about the Black Panthers” (p.305).  But Jamie reveals how, thanks to a Freedom of Information Act request for files on her father, the family learned that Hoover had been “obsessing on Leonard Bernstein since the 1940s, when informers started supplying insinuations that Bernstein was a Communist” (p.315).  The 800-page Bernstein file “substantially increased in girth during the Red Scare years in the 1950s, when my father had even been briefly denied a passport” (p.305).

Well before Felicia’s death, it was clear to Jamie that her father had become a “Controversial Person – a long, complex evolution from his wunderkind public persona of the 1950s” (p.296).  But in addition to her father’s story, Jamie’s memoir also provides her perspective on her own challenges “growing up Bernstein,” the memoir’s subtitle.

* * *

                      Jamie grew up with so many of the trappings of Manhattan wealth that this portion of the story seems stereotypical, bordering on caricature.  Her family lived in fancy Manhattan apartments, eventually the famous Dakota, where John Lennon was a neighbor until he was shot and killed in front of the building, shortly after Jamie had walked past the shooter, who seemed just one of many fans waiting to catch a glimpse of the singer.  The Bernstein family had a lifelong South American nanny, Julia Vega, who was a major part of the family and is a presence throughout the memoir.  The three children relied primarily upon chauffeurs and limousines for local transportation. They enjoyed a secondary residence for weekend and summer getaways, first in Connecticut, then in East Hampton.  The children traveled all over the globe with their father as they grew up.  They attended elite Manhattan private schools, and all three attended Harvard, the school from which Leonard had graduated prior to World War II.  Jamie indicates that admission to Harvard brought little elation for her or her two siblings; they always had “crippling doubts” (p.148) about whether they gained admission on their merits or because they were Leonard Bernstein’s children (at Harvard, Jamie’s first-year roommate was Benazir Bhutto, daughter of Pakistan’s then prime minister; Benazir later became Pakistan’s prime minister herself and was assassinated in 2007).

As a young adult, Jamie followed her father into the music world, although her particular niche was more popular than classical music (a niche her father deeply appreciated; he too loved the Beatles). She was hardly surprised that she enjoyed considerably less success than her father. “Sure, I was musical, but I really was a very poor musician” (p.277).   She stopped fretting about comparisons to her father when she stopped trying to be a musician herself. “It turned out that if I just refrained from making music with my own body, I was much calmer . . . [M]aking music with my own body had mostly made me a mess” (p.362-63).

Jamie had her share of boyfriends as a teenager and young adult, and she manages to tell her readers quite a bit about many of them.  Her first date was with Marlon Brando’s nephew.  She smoked a lot of marijuana, experimented with a host of other mind-expanding substances, and spent a good portion of her early adulthood stoned – with her brother Alexander seemingly even more of a pothead as a young man.   She also partook of Erhard Seminars Training, aka “EST,” a “repackaging of Zen Buddhist principles for Western consumption” (p.175) and a quintessential 1970s way of “getting in touch with one’s inner feelings,” as we said back then.

Late in the memoir, a few years before her father’s death in 1990, Jamie married David Thomas, a man she had met several years earlier at Harvard.  By the end of the memoir, she has given birth to two children, a boy and a girl, and is a devoted mother — but one either separated or divorced from her husband.  She writes that her marriage had centered on David’s ability to relate to her father and fit into the family.  The thrill was gone after Leonard died.  Although the marriage “hung on for another decade,” the “deep harmony we experienced while Daddy was alive never returned” (p.337).   After the detailed run-through of so many boyfriends, readers will be disappointed that Jamie provides no further insight into why her marriage foundered.

Jamie found her professional niche in preserving her father’s legacy by chance, after volunteering to help her daughter’s preschool start a music program.  “It was the one and only regular music gig I ever had” (p.336), she writes.  Finding that she had a knack for bringing music to young people, a forte of her father’s, she devised The Bernstein Beat, a project modeled after her father’s Young People’s Concerts but focused on her father’s music.  Jamie presented The Bernstein Beat across the globe, in places as diverse as China and Cuba (in Cuba, she surprised herself by narrating in Spanish, her mother’s native tongue).  She also co-produced a documentary film, Crescendo: The Power of Music, on a program she had observed in Venezuela designed to use music as a way to reach at-risk young people and keep them away from street violence.  The film, first presented at the Philadelphia Film Festival, won several prizes before Netflix bought it.

Around 2008, Jamie’s long-time friend, conductor Michael Tilson Thomas, asked her to design and present educational concerts for adults with his Miami-based orchestral academy, the New World Symphony. It turned out to be “the best job ever” for her, to the point that she felt she had become the “poster child for life beginning at fifty” (p.361).  She also began to edit a Leonard Bernstein newsletter, apprising readers of Bernstein-related performances and events.  Preserving her father’s legacy has been a “good trade-off,” she writes: “leading a musician’s life minus the music-making part” (p.362-63).

* * *

                        Jamie writes in a breezy, easy-to-read style, mixing candor – her memoir is nothing if not candid – with ample doses of humor, much of it self-deprecatory.  But without the connection to her father, Jamie’s story is mostly one of a Manhattan rich kid’s angst.  The memoir’s real interest lies in Jamie’s insights into the character and complexity of her father.

Thomas H. Peebles

Washington, D.C.

January 25, 2020





Filed under American Society, Biography, Music

Stirring Rise and Crushing Fall of a Renaissance Man



Jeff Sparrow, No Way But This:

In Search of Paul Robeson (Scribe)

            If you are among those who think the term “Renaissance Man” seems fuzzy and even frivolous when applied to anyone born after roughly 1600, consider the case of Paul Robeson (1898-1976), a man whose talents and genius extended across an impossibly wide range of activities.  In the 1920s and 1930s, Robeson, the son of a former slave, thrilled audiences worldwide with both his singing and his acting.  In a mellifluous baritone voice, Robeson gave new vitality to African-American songs that dated to slave plantations.  On the stage, his lead role as Othello gave a distinctly 20th century cast to one of Shakespeare’s most enigmatic characters.  He also appeared in a handful of films in the 1930s.  Before becoming a singing and acting superstar, Robeson had been one of the outstanding athletes of his generation, on par with the legendary Jim Thorpe.  Robeson further earned a degree from Columbia Law School and reportedly was conversant in upwards of 15 languages.

Robeson put his multiple talents to use as an advocate for racial and economic justice internationally.  He was among the minority of Americans in the 1930s who linked European Fascism and Nazism to the omnipresent racism he had confronted in America since childhood.  But Robeson’s political activism during the Cold War that followed World War II ensnared the world-class Shakespearean actor in a tragedy of Shakespearean dimension, providing a painful denouement to his uplifting life story.

Although Robeson never joined a communist party, he perceived a commitment to full equality in the Soviet Union that was missing in the West.  While many Westerners later saw that their admiration for the Soviet experiment had been misplaced, Robeson never publicly criticized the Soviet Union and paid an unconscionably heavy price for his stubborn consistency during the Cold War.  The State Department refused to renew his passport, precluding him from traveling abroad for eight years.  He was hounded by the FBI and shunned professionally.  Robeson had suffered from depression throughout his adult life.  But his mental health issues intensified in the Cold War era and included a handful of suicide attempts.  Robeson spent his final years in limbo, silenced, isolated and increasingly despairing, up to his death in 1976.

In No Way But This: In Search of Paul Robeson, Jeff Sparrow, an Australian journalist, seeks to capture Robeson’s stirring rise and crushing fall.  The book’s subtitle – “In Search of Paul Robeson” — may sound like any number of biographical works, but in this case encapsulates precisely the book’s unique quality.  In nearly equal doses, Sparrow’s work consists of the major elements of Robeson’s life and Sparrow’s account of how he set about to learn the details of that life — an example of biography and memoir melding together.  Sparrow visited many of the places where Robeson lived, including Princeton, New Jersey, where he was born in 1898; Harlem in New York City; London and Wales in Great Britain; and Moscow and other locations in today’s Russia.

In each location, Sparrow was able to find knowledgeable people, such as archivists and local historians, who knew about Robeson and were able to provide helpful insights into the man’s relationship to the particular location.  We learn for instance from Sparrow’s guides how the Harlem that Robeson knew is rapidly gentrifying today and how the economy of contemporary Wales functions long after closure of the mines which Robeson once visited.  Sparrow’s travels to the former Soviet Union take him to several locations where Robeson never set foot, including Siberia, all in an effort to understand the legacy of Soviet terror which Robeson refused to acknowledge.  Sparrow’s account of his travels to these diverse places and his interactions with his guides reads at times like a travelogue.  Readers looking to plunge into the vicissitudes of Robeson’s life may find these portions of the book distracting.  The more compelling portions are those that treat Robeson’s extraordinary life itself.

* * *

            That life began in Princeton, New Jersey, world famous for its university of that name.  The Robeson family lived in a small African-American community rarely visited by those whose businesses and lives depended upon the university.  Princeton was then considered, as Sparrow puts it, a “northern outpost of the white supremacist South: a place ‘spiritually located in Dixie’” (p.29).  William Robeson, Paul’s father, was a former slave who had escaped bondage, earned a degree from Lincoln University, and became an ordained Presbyterian minister.  His mother Maria, who came from an abolitionist Quaker family and was of mixed ancestry, died in a house fire when Paul was six years old.  Thereafter, William raised Paul and his three older brothers and one older sister on his own.  William played a formidable role in shaping young Paul, who later described his father as the “glory of my boyhood years . . . I loved him like no one in all the world” (p.19).

William abandoned Presbyterianism for the African Methodist Episcopal Zion Church, one of the oldest black denominations in the country, and took on a much larger congregation in Somerville, New Jersey, where Paul attended high school.  One of a handful of African-American students in a sea of whites, Robeson excelled academically and played baseball, basketball and football.  He also edited the school paper, acted with the drama group, sang with the glee club, and participated in the debating society.  When his father was ill or absent, he sometimes preached at his father’s church.  Robeson’s high school accomplishments earned him a scholarship to nearby Rutgers University.

At Rutgers, Robeson again excelled academically.  He became a member of the Phi Beta Kappa honor society and was selected as class valedictorian.  As in high school, he was also an outstanding athlete, earning varsity letters in football, basketball and track.  A standout in football, Robeson was “one of the greatest American footballers of a generation,” so much so that his coach “designed Rutgers’ game-plan tactics specifically to exploit his star’s manifold talents” (p.49).  Playing in the backfield, Robeson could both run and throw. His size and strength made him almost impossible to stop.  On defense, his tackling “took down opponents with emphatic finality” (p.49).  Twice named to the All-American Football Team, Robeson was not inducted into the College Football Hall of Fame until 1995, 19 years after his death.

After graduation from Rutgers in 1919, Robeson spent the next several years in New York City.  He enrolled in New York University Law School, then transferred to Columbia and moved to Harlem.  There, Robeson absorbed the weighty atmosphere of the Harlem Renaissance, a flourishing of African-American culture, thinking and resistance in the 1920s.  While at Columbia, Robeson met chemistry student Eslanda Goode, known as “Essie.”  The couple married in 1921.

Robeson received his law degree from Columbia in 1923 and worked for a short time in a New York law firm.  But he left the firm abruptly when a secretary told him that she would not take dictation from an African-American.  Given his talents, one wonders what Robeson could have achieved had he continued in the legal profession.  It is not difficult to imagine Robeson the lawyer becoming the black Clarence Darrow of his age, the “attorney for the damned;” or a colleague of future Supreme Court Justice Thurgood Marshall in the 20th century’s legal battles for full African-American rights.  But Robeson gravitated instead toward singing and acting after leaving the legal profession, while briefly playing semi-pro football and basketball.

Robeson made his mark as a singer by bringing respectability to African-American songs that had originated on the plantations, such as “Sometimes I Feel Like a Motherless Child” and “Swing Low Sweet Chariot” — “sorrow songs” that “voiced the anguish of slavery” (p.81), as Sparrow puts it.  After acting in amateur plays, Robeson won the lead role in Eugene O’Neill’s All God’s Chillun Got Wings, a play about inter-racial sexual attraction that established Robeson as an “actor to watch” (p.69).  Many of the leading lights of the Harlem Renaissance criticized Robeson’s role in the play as reinforcing racial stereotypes, while white reviewers “blasted the play as an insult to the white race” (p.70).  An opportunity to star in O’Neill’s Emperor Jones on the London stage led the Robesons to Britain in 1925, where they lived for several years.  The couple’s only child, Paul Jr., whom they called “Pauli,” was born in London in 1927.

Robeson delighted London audiences with his role in the musical Show Boat, which proved to be as big a hit in Drury Lane as it had been on Broadway.  He famously changed the lines to “Old Man River” from the meek “I’m tired of livin’” and “feared of dyin'” to a declaration of resistance: “I must keep fightin’/Until I’m dyin'”.  His rendition of “Old Man River,” Sparrow writes, transported the audience “beyond the silly narrative to an almost visceral experience of oppression and pain.”  Robeson used his huge frame, “bent and twisted as he staggered beneath a bale, to convey the agony of black history while revealing the tremendous strength forged by centuries of resistance” (p.103).

The Robesons in their London years prospered financially and moved easily in the upper circles of respectable society.  The man who couldn’t rent a room in many American cities lived as an English gentleman in London, Sparrow notes.  But by the early 1930s, Robeson had learned to see respectable England as “disconcertingly similar” to the United States, “albeit with its prejudices expressed through nicely graduated hierarchies of social class.  To friends, he spoke of his dismay at how the British upper orders related to those below them” (p.131).

In London, as in New York, the “limited roles that playwrights offered to black actors left Paul with precious few opportunities to display any range. He was invariably cast as the same kind of character, and as a result even his admirers ascribed his success to instinct rather than intellect, as a demonstration not so much of theatrical mastery but of an innate African talent for make-believe, within certain narrow parameters” (p.107). Then, in 1930, Robeson received a fateful invitation to play Othello in a London production, a role that usually went to an actor of Arab background.

Robeson’s portrayal of Othello proved triumphant, with the initial performance receiving an amazing 20 curtain calls.  In that production, which ran for six weeks, Robeson transformed Shakespeare’s tragedy into an “affirmation of black achievement, while hinting at the rage that racism might yet engender” (p.113).  Thereafter, Othello “became central to Paul’s public persona” (p.114), providing a role that seemed ideal for Robeson: a “valiant high-ranking figure of color, an African neither to be pitied nor ridiculed” (p.109).

While in London, Robeson developed sensitivity to the realities of colonial Africa through friendships with men such as Nnamdi Azikiwe, Jomo Kenyatta, and Kwame Nkrumah, future leaders of independence movements in Nigeria, Kenya and Ghana, respectively.  Robeson retained a keen interest in African history and politics for the remainder of his life.  But Robeson’s commitment to political activism seems to have crystallized through his frequent visits to Wales, where he befriended striking miners and sang for them.

Robeson supported the Welsh labor movement because of the “collectivity it represented. In Wales, in the pit villages and union lodges and little chapels, he’d found solidarity” (p.149).  Robeson compared Welsh churches to the African-American churches he knew in the United States, places where a “weary and oppressed people drew succor from prayer and song” (p.133).  More than anywhere else, Robeson’s experiences in Wales made him aware of the injustices which capitalism can inflict upon those at the bottom of the economic ladder, regardless of color.  Heightened class-consciousness proved to be a powerful complement to Robeson’s acute sense of racial injustice developed through the endless humiliations encountered in his lifetime in the United States.

Robeson’s sensitivity to economic and racial injustice led him to the Soviet Union in the 1930s, which he visited many times and where he and his family lived for a short time.  But a stopover in Berlin on his initial trip to Moscow in 1934 opened Robeson’s eyes to the Nazis’ undisguised racism.  Nazism to Robeson was a “close cousin of the white supremacy prevailing in the United States,” representing a “lethal menace” to black people.  For Robeson, the suffering of African Americans in their own country was no justification for staying aloof from international politics, but rather a “reason to oppose fascism everywhere” (p.153).

With the outbreak of the Spanish Civil War in 1936, Spain became the key battleground to oppose fascism, the place where “revolution and reaction contested openly” and “Europe’s fate would be settled” (p.160).  After speaking and raising money on behalf of the Spanish Republican cause in the United States and Britain, Robeson traveled to Barcelona, where he sang frequently.  Robeson’s brief experience in Spain transformed him into a “fervent anti-fascist, committed to an international Popular Front: a global movement uniting democrats and radicals against Hitler, Mussolini, and their allies” that would also extend democracy within the United States, end colonialism abroad, and “abolish racism everywhere” (p.196-97).

Along with many progressives of the 1930s, Robeson looked to the Soviet Union to lead the global fight against racism and fascism.  Robeson once said in Moscow, “I feel like a human being for the first time since I grew up.  Here I am not a Negro but a human being” (p.198).  Robeson’s conviction that the Soviet Union was a place where a non-racist society was possible “sustained him for the rest of his political life” (p.202).  Although he never joined a communist party, from the 1930s onward Robeson accepted most of the party’s ideas and “loyally followed its doctrinal twists and turns” (p.215).  It is easy, Sparrow indicates, to see Robeson’s enthusiasm for the Soviet Union as the “drearily familiar tale of a gullible celebrity flattered by the attentions of a dictatorship” (p.199).

Sparrow wrestles with the question of the extent to which Robeson was aware of the Stalinist terror campaigns that by the late 1930s were taking the lives of millions of innocent Soviet citizens.  He provides no definitive answer to this question, but Robeson never wavered publicly in his support for the Soviet Union.  Had he acknowledged Soviet atrocities, Sparrow writes, he would have besmirched the “vision that had inspired him and all the people like him – the conviction that a better society was an immediate possibility” (p.264).

Robeson devoted himself to the Allied cause when the United States and the Soviet Union found themselves on the same side fighting Nazi aggression during World War II, “doing whatever he could to help the American government win what he considered an anti-fascist crusade” (p.190).  His passion for Soviet Russia “suddenly seemed patriotic rather than subversive” (p.196-97).  But that quickly changed during the intense anti-Soviet Cold War that followed the defeat of Nazi Germany.  Almost overnight in the United States, communist party members and their sympathizers became associated “not only with a radical political agenda but also with a hostile state.  An accusation of communist sympathies thus implied disloyalty – and possibly treason and espionage” (p.215).

The FBI, which had been monitoring Robeson for years, intensified its scrutiny in 1948.   It warned concert organizers and venue owners not to allow Robeson to perform “communist songs.”  If a planned tour went ahead, Sparrow writes, proprietors were told that they would be:

judged Red sympathizers themselves. The same operation was conducted in all the art forms in which Paul excelled.  All at once, Paul could no longer record music, and the radio would not play his songs.  Cinemas would not screen his movies. The film industry had already recognized that Paul was too dangerous; major theatres arrived at the same conclusion. The mere rumor that an opera company was thinking about casting him led to cries for a boycott.  With remarkable speed, Paul’s career within the country of his birth came to an end (p.216).

In 1950, the US State Department revoked Robeson’s passport after he declined to sign an affidavit denying membership in the Communist Party.  When Robeson testified before the House Un-American Activities Committee (HUAC) in 1956, a Committee member asked Robeson why he didn’t go back to the Soviet Union if he liked it so much.  Robeson replied: “Because my father was a slave . . . and my people died to build this country, and I am going to stay here, and have a part of it just like you.  And no fascist-minded people will drive me from it. Is that clear?” (p.228). Needless to say, this was not what Committee members wanted to hear, and Robeson’s remarks “brought the moral weight of the African-American struggle crashing down upon the session” (p.228-29).

Robeson was forced to stay on the sidelines in early 1956 when the leadership of the fledgling Montgomery bus boycott movement (which included a young Dr. Martin Luther King, Jr.) concluded that his presence would undermine the movement’s fragile political credibility.  On the other side of the Cold War divide, Soviet leader Nikita Khrushchev delivered a not-so-secret speech that winter to party loyalists in which he denounced Stalinist purges.   Sparrow hints but doesn’t quite say that Robeson’s exclusion from the bus boycott and Khrushchev’s acknowledgment of the crimes committed in the name of the USSR had a deleterious effect on Robeson’s emotional well-being.   He had suffered from bouts of depression throughout his adult life, most notably when a love affair with an English actress in the 1930s ended badly (one of several Robeson extra-marital affairs). But his mental health deteriorated during the 1950s, with “periods of mania alternating with debilitating lassitude” (p.225).

Even after Robeson’s passport was restored in 1958 as a result of a Supreme Court decision, he never fully regained his former zest.  A broken man, he spent his final decade nearly invisible, living in his sister’s care before dying of a stroke in 1976.

* * *

                     Sparrow describes his book as something other than a conventional biography, more of a “ghost story” in which particular associations in the places he visited form an “eerie bridge” (p.5) between Robeson’s time and our own.  But his travels to the places where Robeson once lived and his interactions with his local guides have the effect of obscuring the full majesty and tragedy of Robeson’s life.  With too much attention given to Sparrow’s search for what remains of Robeson’s legacy on our side of the bridge, Sparrow’s part biography, part travel memoir comes up short in helping readers discover Robeson himself on the other side.



Thomas H. Peebles

Paris, France

October 21, 2019




Filed under American Society, Biography, European History, History, Politics, United States History