
100% American?

 

Linda Gordon, The Second Coming of the KKK:

The Ku Klux Klan of the 1920s and the American Political Tradition (Liveright Publishing)

            The Ku Klux Klan, today a symbol of American bigotry, intolerance, and domestic terrorism at its most primitive, had three distinct iterations in United States history.  The original Klan arose in the American South in the late 1860s, in the aftermath of the American Civil War; it was a secret society that utilized intimidation, violence, assassination and other forms of terror to reestablish white supremacy and thwart efforts of recently freed African-American slaves to exercise basic rights.  This iteration of the Klan faded during the following decade, but not before helping to cement the regime of rigid racial segregation that prevailed in the American South for the remainder of the century and beyond.  Then, in the 1950s and 1960s, the Klan resurfaced in the South, again as an organization relying upon violence and intimidation to perpetuate white supremacy and rigid racial segregation, this time in the face of the burgeoning Civil Rights movement of the era. 

          In between was the Klan’s second iteration, emerging in the post-World War I 1920s and the subject of Linda Gordon’s The Second Coming of the KKK: The Ku Klux Klan of the 1920s and the American Political Tradition.  Gordon, a prominent American feminist and historian, portrays the 1920s Klan as significantly more complex than its first and third iterations.  Although bigotry and intolerance were still at the heart of the 1920s Klan, it directed its animosity not only at African-Americans but also at Catholics, Jews, and immigrants.  Gordon considers the second Klan to be a reaction to the supposed licentiousness of the “Roaring Twenties” and the rapidly changing social mores of the decade.   With a central mission of purging the country of elements deemed insufficiently “American,” the Klan in the 1920s sought to preserve or restore white Protestant control of American society, which it saw slipping away.

            As the reference to the “American Political Tradition” in the sub-title suggests, much of Gordon’s interpretation consists of an elaboration upon how six distinct American “traditions” came together to give rise to the Klan’s rebirth after World War I: racism, nativism, temperance, fraternalism, Christian evangelicalism, and populism.  She also includes a final section on how, despite ostensible similarities, the Klan differed from the European fascism that came to power in Italy and was bubbling in Germany in the same time frame.  Although it shared with fascist Italy and Nazi Germany a vision for the future based on “racialized nationalism,” the Klan’s nationalism melded racism and ethnic bigotry with evangelical Protestant morality.  The second Klan thus turned its enemies into sinners in a manner that set it apart not only from European fascism but also from the first and third Klan iterations.

            The 1920s Klan was anything but a secretive organization.  It elected hundreds of its members to public office, controlled newspapers and magazines, and boasted of six million members nationally.  It was a fraternal organization with innovative recruitment methods and a decentralized organizational structure, only marginally different from the Rotarians and the Masons.  Whereas the Klan in its first and third iterations was a distinctly southern organization, the 1920s Klan flourished in northern and western states as well as the American South; it was particularly strong in Indiana and Oregon. 

            In Gordon’s interpretation, the Klan in the 1920s further differentiated itself from its first and third iterations by engaging only rarely in what she terms “vigilantism” — overt intimidation and violence.  Readers expecting a gruesome recitation of middle-of-the-night lynchings, the Klan’s trademark form of domestic terrorism, are likely to be disappointed by this volume.  She rarely mentions the term “lynching.”  The primary incident of overt intimidation she highlights is one already familiar to many readers: the Klan’s nighttime assault in 1925 on the Omaha, Nebraska, house of the family of Malcolm Little, later known as Malcolm X.  Klansmen on horseback surrounded the Little house, shattered the windows and forced the family to flee Omaha.  The assault, Gordon indicates, was “typical of the northern Klan’s vigilantism – usually stopping short of murder or physical assault, but nevertheless communicating a credible threat of violence to Klan enemies.  The vast majority of Klanspeople never participated in this vigilantism” (p.94).  

            But what about vigilantism in the South?  Gordon hints at several points that murder and physical violence may have been more extensive in southern states than in the North and West (e.g., vigilantism was the Klan’s “core function” in the South, whereas Klan organizations in the North and West “rarely” engaged in violent attacks; p.206).  But she barely treats the American South, focusing almost exclusively on northern and western states, thereby leaving readers with the sense that they may not have received a full account of the vigilantism of the 1920s Ku Klux Klan, and that a book which delved more deeply into the 1920s Klan in southern states might have been altogether different from this account.

            At least in northern and western states, Gordon argues, the Klan’s views were not out of step with those of most white American Protestants, the majority group in the United States in the 1920s.  “Never an aberration” in its prejudices, the second iteration of the Klan was, “just as it claimed, ‘100% American’” (p.36).  But in enunciating values with which a majority of white American Protestants of the 1920s probably agreed, the Klan:

whirled these ideas into greater intensity.  The Klan argued that the nation itself was threatened.  Then it declared itself a band of warriors determined to thwart that threat.  In the military metaphors that filled Klan rhetoric, it had been directed by God – a Protestant God, of course – to lead an army of right-minded people to defeat the nation’s internal enemies (p.36). 

* * *

            Antagonism to diversity, a “form of pollution, uncleanliness,” is key to understanding the Klan in the 1920s.  “Fear of heterogeneity” underlay its “extreme nationalism and isolationism; Klanspeople saw little to admire in any foreign culture” (p.58).  The Klan viewed Catholics as threats because their religion was global, making Catholics subservient to Rome and disloyal to America  —  “underground warriors for their foreign masters” (p.45).  The Klan charged Catholics with what amounts to “unfair competition,” alleging that emissaries of the Pope in Rome had helped Catholics “take over police forces, newspapers, and big-city governments” (p.203). 

            Jews were guilty of a different kind of foreign allegiance, to a “secular international cabal of financiers who planned to take over the American economy through its financial institutions” and establish a “government within our government” (p.49).  Jews did not produce anything; they were mere financial middlemen who contributed no economic value to the United States.  The Klan blamed the Jews for the decline in morality, for women’s immodest dress, and for the debasement of the culture coming from Hollywood.   But, “in one remarkable silence about the Jews,”  Klan discourse “did not often employ the reverse side of classic anti-Semitism: that these dishonest merchant capitalists were also Communists” (p.49).  

            Among immigrants, the Klan targeted in particular Mexicans, Japanese, Chinese and East Asians, along with Southern and Eastern Europeans (which of course included many Catholics and Jews).  Exempted were what it termed “Nordic” immigrants, generally Protestants from Germany, the Scandinavian countries and the British Isles.  The Klan argued “not only for an end to the immigration of non-‘Nordics’ but also for deporting those already here.  The date of their immigration, their longevity in the United States, mattered not” (p.27).  No matter how long such immigrants remained in the country, they could never become fully American.

            With rites based on Bible readings and prayer, the second Klan’s religiosity “might suggest that it functioned as a Protestant denomination.”  But the Klan was “not a denomination,” Gordon writes.  It sought to “incorporate existing Protestant churches, not replace them, and to put evangelism at their core.  It was in many ways a pan-Protestant evangelical movement, that is, an attempt to unite evangelical Protestants across their separate denominations” (p.88).  The Klan relied heavily upon evangelical ministers for recruitment, a mobilization that “foreshadowed – and probably helped generate – the entry of Christian Right preachers into conservative politics fifty years later” (p.90).  The 1920s “may have been the first time that bigotry became a major theme among [evangelical Protestant] preachers” (p.91).   

            The Klan joined enthusiastically with evangelical Protestants to support Prohibition, the anti-alcohol movement that succeeded in enshrining temperance in the United States Constitution through the 18th Amendment.  For a full 14 years, from 1919 to 1933, the Klan theoretically had constitutional sanction for its vision of a world without alcoholic beverages.  Defense of Prohibition was universal among the Klan’s diverse chapters, and in Gordon’s view was “arguably responsible for the fact that many relatively tolerant citizens shrugged off its racist rhetoric” (p.95).  Even as it championed Prohibition, the Klan blamed its enemies for violations.  In Klannish imagination, “Catholics did the drinking and Jewish bootleggers supplied them” (p.58).

            The Klan also joined with many women’s groups in supporting Prohibition.  Klanswomen formed a parallel organization, Women of the Ku Klux Klan (WKKK), which Gordon finds close in outlook and approach to the Women’s Christian Temperance Union, one of the major groups backing the 18th Amendment.  The WKKK supported women’s suffrage – for white, Protestant women.  Klanswomen also supported women’s employment and even called for women’s economic independence.  Although outnumbered about 6 to 1 in the Klan, women contributed a new argument to the cause: that women’s emergence as active citizens would help purify the country, bringing “family values” back into the nation’s governance.  Women engaged in charitable work on behalf of the Klan, raised money for orphanages, schools and individual needy families, and placed Protestant bibles in the schools.  Women often led youth divisions of the Klan.  Without women’s long hours invested in Klan activities, Gordon argues, the second Klan “could not have become such a mass movement” (p.129). 

            Yet even in an organization based on male hierarchy, one that played specifically to white Protestant males’ anxiety over loss of privileged status in the new and unsettling post-World War I years, many women rose to national prominence as leaders of the Klan’s second coming.  Perhaps the most striking characteristic of such women was their “entrepreneurship,” which involved “both ambition and skill, both principle and profit . . . Experienced at organizing large events, state-of-the-art in managing money, unafraid to attract publicity, they were thoroughly modern women” (p.122-23).  Gordon seems unsure how to present these strong, assertive women who freely embraced the Klan creed of bigotry and intolerance.   The Klanswomen’s activism “requires a more capacious understanding of feminism,” she writes.  Their “combination of feminism and bigotry may be disturbing to today’s feminists, but it is important to feminism’s history.  There is nothing about a generic commitment to sex equality that inevitably includes a commitment to equalities across racial, ethnic, religious or class lines” (p.123).  At another point, she admonishes readers to “rid themselves of notions that women’s politics are always kinder, gentler, and less racist than men’s” (p.110).

            In its economic values, the Klan was wholly conservative.  It was devoted to the business ethic and revered men of great wealth, with its economic complaints invariably taking the form of “racial and religious prejudices”  (p.203).  The Klan sought to implement its vision of a white Protestant America “without fundamental changes to the political rules of American democracy.  The KKK was a political machine and a social movement, not an insurrectionary vanguard” (p.208).   What made the Ku Klux Klan so successful in the early 1920s was an aggressive, state-of-the-art approach to recruiting:

Far from rejecting commercialization and the technology it brought, such as radio, the Klan’s system was entirely up-to-date, even pioneering, in its methods of selling.  From its start, the second Klan used what might be called the social media of its time.  These methods – a professional PR firm, financial incentives to recruit, advertisements in the mass media, and high-tech spectacular pageants – produced phenomenal growth for several years (p.63).

            The Klan in its second iteration faded quickly, beginning around 1925.  By 1927 Klan membership had shrunk to about 350,000.  Several highly publicized scandals and cases of criminal embezzlement, exposing Klan leaders’ crimes, hypocrisy, and misbehavior, induced the Klan’s precipitous fall in the latter portion of the 1920s, along with those leaders’ “profiteering” — “gouging members through dues and the sale of Klan paraphernalia” (p.191).  Power struggles among leaders produced splits and even rival Klans under different names.  Rank-and-file resentment transformed the Klan’s already high turnover into “mass shrinkage as millions of members either failed to pay dues or formally withdrew” (p.191). 

            But the longest-term force behind the Klan’s decline may have been the increasing integration of Catholics and Jews into American society.  The “allegedly inassimilable Jews assimilated and influenced the culture, both high-brow and low-brow.  The alleged vassals of the pope began to behave like other immigrants, firm in their allegiance to America” (p.197).   By contrast, the Klan “never gave up its hatred for people of color.  As African-Americans moved northward and westward, as more Latin American and East Asian immigrants arrived, the latter-day Klan shifted toward a simpler, purer racial system, with two categories: white and not white” (p.197-98).

* * *

            Despite its precipitous decline, the Ku Klux Klan in its second iteration triumphed in many respects.  The biggest tangible Klan victory was in legislation restricting immigration.  Although the Klan was not solely responsible, its propaganda “surely strengthened racialized anti-immigrant sentiment both in Congress and among the voters” (p.195).  Less tangibly, the Klan “influenced the public conversation, the universe of tolerable discourse” (p.195).  The second Klan “spread, strengthened, and radicalized preexisting nativist and racist sentiments among the white population.  In reactivating these older animosities it also re-legitimated them.  However reprehensible hidden bigotry might be, making its open expression acceptable has significant additional impact” (p.195).   In this sense, Gordon’s compact and captivating interpretation serves as a reminder that the Klan remains a presence still to be reckoned with today, nearly a century after its second coming. 

Thomas H. Peebles

Bordeaux, France

January 28, 2019


A Tale of Three Cities’ Spaces and Places

Mike Rapport, The Unruly City:

Paris, London and New York in the Age of Revolution

In The Unruly City: Paris, London and New York in the Age of Revolution, Mike Rapport, professor of modern European history at Scotland’s University of Glasgow, provides a novel look at three urban centers in the last quarter of the 18th century: Paris, London and New York.  As the title indicates, the century’s last quarter was the age of revolution: in America at the beginning of this roughly 25-year period, as the 13 American colonies fought for their independence from Great Britain and became the United States of America; followed by the French Revolution in the next decade, which ended monarchial rule, abolished most privileges of the aristocracy and clergy, and overturned deeply rooted social and cultural norms.  Great Britain somehow avoided any such upheaval during this time, and that is one of the main points of the story. 

But radical democratic movements were afoot in all three countries, favoring greater equality, a drastically expanded franchise and opposition to entrenched privilege  – objectives overlapping with but not identical to those of the revolutions in America and France.  How these democratic impulses played out in each city is the real core of Rapport’s story — or, more precisely, how these impulses played out in each city’s spaces and places.  In examining the contribution of each city’s topography – its spaces and places — to political outcomes, Rapport utilizes a “bottom up” approach which emphasizes the roles played by each city’s artisans, small shopkeepers, and everyday working people as they struggled against entrenched elites.  Rapport thus brings the perspective of an urban geographer and demographer to his story.  But there is also a geo-political angle that needs to be factored into the story. 

The French and Indian War, the North American theater of the Seven Years War, in which France and its archenemy Britain vied between 1756 and 1763 for control of large swaths of the American continent, ended in ignominious defeat for France.  But both Britain and France emerged from the war with staggeringly high debts, triggering financial crises in both countries.  A decade and a half later, in 1777, monarchial France lent assistance to the American colonies as they broke away from Britain.  The newly formed United States of America in turn largely supported the French Revolution when it broke out in 1789, and sided with revolutionary France when it found itself again at war with Britain in 1793.  Rapport’s topographical approach, with its concentration on the cityscapes of Paris, London and New York, provides a fresh perspective on these familiar late 18th century events.

In the final quarter of the 18th century, Paris and London were sprawling nerve centers of venerable, centuries-old civilizations, while New York was far smaller, far younger, and not quite the nerve center of an emerging New World civilization.  In 1790, moreover, in the middle of Rapport’s story, New York lost its short-lived position as the political capital of the newly created United States of America.  But Paris was different from both New York and London in ways that are consequential for this multi-layered, complex and ambitious tale of three cities. 

Although France’s revolution was nation-wide, its course was dictated by events in Paris in a manner altogether different from the way the American Revolution unfolded in New York.  France in the last quarter of the 18th century lived under a monarchy described alternatively as “despotic” and “absolute.”  It benefitted from nothing quite comparable to America and Britain’s shared heritage from England’s 1688 “Glorious Revolution,” which established critical individual rights and checks upon monarchial power, all of which were “jealously defended by British subjects on both sides of the Atlantic and enviously coveted by educated, progressive Frenchmen and -women” (p.xv). Democratic radicalism in France thus had an altogether different starting point from that in America or Britain, one of the reasons radicalism fused with revolutionary fervor in France in a way it never did in either America or Britain.  These divergences between France on the one hand and America and Britain on the other help explain why Rapport’s emphasis on urban spaces and places serving political ends works best in Paris.   

Rapport resolutely links phases of the French Revolution to discrete Parisian spaces and places: giving impetus to the revolution’s early stages were the Palais-Royal, a formerly aristocratic enclave on the Right Bank, and the artisanal district of the Faubourg St. Antoine, located just east of the hulking Bastille fortress; Paris’ central market, Les Halles, and the Cordeliers district, centered around today’s Place de l’Odéon on the Left Bank, sustained the revolution’s more radical stages.  The distinct character of these sections of Paris, Rapport writes, goes “a long way to explain how the events unfolded and where much of the revolutionary impulse came from.”  Their geographical and social makeup made Paris a “truly revolutionary city, with a popular militancy that kept politics on the boil with each new crisis.  This combination of geography, social structure and political activism distinguished the Parisian experience from that of London and New York” (p.202). 

When he moves from revolutionary Paris to New York and London, Rapport’s urban topographical approach seems comparatively flat and somewhat forced.  He shows how New York’s Common, located near the city’s northern limits in today’s lower Manhattan, became the focal point for the city’s rising democratic fervor and its resistance to British rule.  In London, he focuses upon St. George’s Fields, functionally similar to New York’s Common as a location where large groups from all walks of life and all parts of the metropolis gathered freely.  St. George’s Fields, which today encompasses Waterloo Station, became the center of mass demonstrations in support of democratic radical John Wilkes, who was jailed for seditious libel in a prison overlooking this largely undeveloped, semi-rural expanse.   But the most compelling story for New York and London is how the democratic energy in the two cities stopped short of the thorough social and cultural uprooting of the French Revolution, much to the relief of elites in both cities.     

* * *

By the fateful year of 1789, Paris’ Palais-Royal, then an “elegant complex of colonnades, arcades, gardens, fountains, apartments, theatres, offices and boutiques” (p.127), had become a combative public gathering place where journalists and orators “intellectually pummeled, ideologically bludgeoned and rhetorically battered the old order” (p.125).  Questions involving royal despotism and the rights of citizens were debated and discussed across Paris and throughout France, but “nowhere did these great questions generate more white hot fervor than in the Palais-Royal” (p.127).  The Palais-Royal gave political voice to the insurrection against the monarchy and inherited privilege that broke out in Paris in the spring of 1789 and spread nation-wide.  Without the “contentious cauldron” of the Palais-Royal, Rapport concludes, it is “hard to imagine the insurrection unfolding as it did – and even having the revolutionary results that it did” (p.145).

The Faubourg St. Antoine contributed “special vigor” (p.126) to the 1789 uprising, which resulted in a transfer of power from the King to an elected chamber, the National Assembly, and the subsequent July 1789 assault on the Bastille. An artisanal district famous for its furniture and cabinet makers, Faubourg St. Antoine’s topography and location, Rapport writes, made the neighborhood “especially militant” (p.137) because it was conscious of being outside the old limits of the city.  There was nothing in either New York or London to match the Faubourg’s “geographical cohesion, its homogeneity, its separateness and its defensiveness” (p.137). In the Faubourg St. Antoine, a political uprising became a social and cultural upheaval as well.  As “bricks and mortar places,” Rapport writes, both the Palais-Royal and the Faubourg St. Antoine had a “material impact on the shape and outcome of events” and played outsized roles in marking the “final crisis of the old order” (p.126).

As the revolution became more radical, the central market of Les Halles, “the belly of Paris,” also played an outsized role.  Les Halles was the largest and most popular of several Parisian markets.  Its particular culture and geographic location gave Les Halles a “revolutionary dynamism” (p.177) that bound together those who lived and worked there, especially women.  A coordinated women’s march, fueled by food shortages throughout Paris, emanated from Faubourg St. Antoine and Les Halles in October 1789.  The march ended in Versailles, where the women invaded the National Assembly and gained an audience with King Louis XVI.  The King agreed to give his royal sanction to a series of revolutionary demands and, more to the point, promised that Paris would be supplied with bread.  Later the same day, the women forced the King and his family to return to Paris, where they lived as virtual hostages in a city whose women had “demonstrated their determination to keep the Revolution on track” (p.183).

In the aftermath of the march, the National Assembly, gripped by fear of the “unpredictable, uncontrollable force of popular insurrection” (p.185-86), restricted the vote to “active” citizens, adult males who paid a set level of taxes, only about one-half of France’s male population.  The subsequent move to expand the franchise in 1789-90 originated in the Cordeliers district, an “effervescent combination of an already articulate, politicized artisanal population, combined with the concentration of a sympathetic radical leadership” (p.188).  After Lucile and Camille Desmoulins, husband-and-wife journalists from the district, wrote an important article in which they attacked the restriction of the franchise – “What is this much repeated word active citizen supposed to mean?  The active citizens are the ones who took the Bastille” (p.190) – the Cordeliers district assembly in June 1790 proposed that all males who paid “any tax whatsoever, including indirect taxes, which included just about everybody, should have ‘active’ citizenship” (p.188-89; notwithstanding the thorough uprooting of the French Revolution, there was no move to extend the franchise to women).   

The Cordeliers district narrowed the political divide between social classes in no small part because of the Society of the Friends of the Rights of Man and the Citizen, founded in the heart of the district.  Made up of merchants, artisans, tradesmen, retailers and radical lawyers, the Society also encouraged women to attend its sessions.  It saw its primary purpose as “rooting out the threats to the Revolution” and “challenging the limits placed on political rights by the emerging constitutional order” (p.191).  Its influence “rested in its distinctly metropolitan reach” and in having its roots in a neighborhood whose “social and political character made it a linchpin binding the axle of middle-class radicalism to the wheels of popular revolutionary activism” (p.195-96).  As the revolution entered its most radical phases, the Cordeliers district proved to be “one of the epicenters of the metropolitan outburst,” unlike any other district in Paris, bridging the “social gap between the radical middle-class leadership of the burgeoning democratic movement and the militants of the city’s working population” (p.195).    

            No specific Parisian neighborhoods are linked to the turn that the Revolution took in 1793-94 known as the Terror, “synonymous with the ghastly mechanics of the guillotine” (p.223).  This phase occurred at a time of multiple crises, when the newly declared French Republic grasped at repressive and draconian means to defend itself.  Driven by the “blunt, direct and violent” (p.226-27) radicals who called themselves sans-culottes (literally, those “without breeches”), the Terror was the period that saw King Louis XVI and Marie Antoinette executed, followed by a chilling string of prominent figures deemed “enemies of the revolution” (among them prior revolutionary leaders Maximilien Robespierre and Georges-Jacques Danton, along with Cordelier journalist Camille Desmoulins).  Rapport’s chilling chapter on this phase serves as a reminder of the perils of excessive revolutionary zeal.

Throughout the Revolution, all sections of Paris felt its physical effects in the adaptation of buildings for the multitude of institutions of the new civic order. The process of taking over buildings in every quarter of Paris  — churches, offices, barracks and mansions — not only “made the Revolution more visible, indeed more intrusive, than ever before, but also represented the physical advance of the revolutionary organs deeper in the neighborhoods and communities of the capital” (p.226).  The “physical transformation of interiors, the adaptation of internal spaces and the embellishment of the buildings with revolutionary symbols, reflected the radicalism of the French Revolution in constructing an egalitarian order in an environment that had grown organically out of corporate society based on privilege and royal absolutism” (p.310).  In New York, the physical transformation of the city was not so thoroughgoing, “since the American Revolution did not constitute quite such a break with the past” (p.171).     

* * *

New York in the late 18th century was already an important business center, the major gateway into the New World for trade and commerce from abroad, with a handful of powerful, well-connected families dominating the city’s politics.  Although its population was a modest 30,000, diminutive in comparison to London and Paris, it was among the world’s most heterogeneous cities.  In its revolutionary years, New York witnessed what Rapport terms a “dual revolution,” both a “broad coalition of colonists against British rule” and a “revolt of the people against the elites,” which blended “imperial, local and popular politics in an explosive mix” (p.2). The contest between the “people of property” and the “mob” was about the “future forms of government, whether it should be founded upon aristocratic or democratic principles” (p.28-29), in the words of a future New York Senator.

The tumultuous period that ended with independence in 1783 began when Britain sought to raise money to pay for the Seven Years War through the Stamp Act of 1765, which imposed a duty on all legal documents (e.g., deeds, wills, licenses, contracts), the first direct tax Britain had imposed on its American colonies.  Triggered by resistance to the Stamp Act, the dual American revolution in the years leading up to war between the colonies and Britain moved in New York from sites controlled by the city’s elites, especially the debating chambers of City Hall, to sites more accessible to the public, in particular the open space known as the Common, along with the city’s taverns and the streets themselves. 

More than just a public space, the Common was “also a site where the power of the state, in all its ominous brutality, was on display” (p.18).  Barracks to house British troops had been erected on the Common during the Seven Years War, and it was the site of public executions.  It was on the Common that the Liberty Pole, a pine ship’s mast, was erected and became the city’s most conspicuous symbol of resistance, a “deliberate, defiant counterpoise” (p.18) to British state authority.  The first Liberty Pole was hacked down in August 1766, only to be replaced in the following days.  This pattern repeated itself several times, as the Common became the most politically charged place in New York, where a more militant, popular form of politics emerged to challenge the ruling elites.

  It was on the Common, at the foot of the Liberty Pole, that New Yorkers received the news in April 1775 that war with the British had broken out in New England.  In 1776, George Washington announced the promulgation of the Declaration of Independence on the same site. During the war for independence, the Liberty Pole became the symbolic site where people declared their support for independence – or, in many cases, were compelled to do so.     

In 1788, after the American colonies had won independence from Britain, the Common served as the start and end point of a massive parade through New York City in support of a proposed constitution to govern the country now known as the United States of America, at a time when the entire State of New York was wrestling with the decision whether to become one of the last states to ratify the proposed constitution.  The choice of the Common as the parade’s start and end point was, Rapport writes, highly symbolic, “connecting the struggle for the Constitution with the earlier battles around the Liberty Pole” (p.162).  Dominated by the city’s tradesmen and craft workers, the parade was a “tour of artisanal force” that “connected the Constitution with the commercial prosperity upon which the city and its working people depended,” serving as a reminder to the city’s elites that the revolution had “not just secured independence, but [had also] mobilized and empowered the people” (p.163).

The parade from the Common through New York’s streets also demonstrated the degree to which democratic radicalism in New York had been tempered.  The city’s radicals, aware that New York’s prosperity depended upon good commercial relations and a thriving mercantile community, “reached beyond mere vengeance and aimed at forging a more equal democracy, in which the overmighty power of the wealthy and the privileged would be cut down to size, allowing artisans and ‘mechanics’ to enjoy the democratic freedoms that they had done so much to secure” (p.156). 

With their vested interest in the financial and commercial prosperity of the city, New York’s radicals were not yet ready to call for “leveling,” or “social equality,” among the greatest concerns of the city’s privileged classes.  In London, too, democratic radicalism stopped short of a full-scale challenge to the social order. 

* * *

While Britain was attempting to rein in America’s rebellious colonies, a movement for democratic reform emerged in London, centered on parliamentary reform and expansion of the suffrage.  The movement’s unlikely leader was journalist and parliamentarian John Wilkes, who symbolized “defiance towards the elites and the overbearing authority of the eighteenth-century British state” (p.35).  The liberties that Wilkes defended began with those specific to the City, a small and nearly autonomous enclave within metropolitan London.  Known today as London’s financial district, the City in the latter half of the 18th century was a “lively hub of activity of all kinds, not just finance but also highly skilled artisans, printers and merchants plying their trades” (p.37).  It had its own police constables and enjoyed privileges unavailable elsewhere in London, including direct access to the King and Parliament. 

Wilkes, writing “inflammatory satire,” excoriated the government and campaigned for an expansion of voting rights with a mixture of “irony, humor and vitriol” (p.42).  Wilkes tied his in-your-face radicalism to a defense of the traditional liberties and power of the City.  But his radicalism caused him to be expelled from the House of Commons, then tried and convicted of seditious libel.  For London’s working people, Wilkes became “another victim of a harsh, unforgiving system that seemed stacked in favor of the elites” (p.51).  Wilkes was jailed in a prison that overlooked St. George’s Fields, London’s undeveloped, semi-rural gathering point on the opposite side of the River Thames from the City.  St. George’s Fields came to represent symbolically a “departure from the narrow defense of the City’s privileges towards a broader demand for a national politics more responsive to the aspirations of the people at large” (p.44).   

When authorities failed to release Wilkes on an anticipated date in 1768, a major riot broke out in St. George’s Fields in which seven people were killed.  Mobilization on St. George’s Fields on behalf of Wilkes, Rapport writes,  “brought thousands of London’s working population into politics for the first time, people who had little or no stake in the traditional liberties of the City, let alone a vote in parliamentary elections, but who saw in Wilkes’s defiance of authority a mirror of their own daily struggle for self-respect and dignity in the face of the overbearing power of the state and the social dominance of the elites” (p.44).

Once freed, Wilkes went on to be elected Lord Mayor of the City in 1774 and chosen also to represent suburban Middlesex in Parliament.  Two years later he was pushing the altogether radical notion of universal male suffrage. But, rather than attacking the privileges of the City, the movement in support of Wilkes fused with a defense of the City.  This fusion, in Rapport’s view, “may be one reason the resistance to authority in London, though certainly riotous, did not become revolutionary . . . Londoners were able to make their protests without challenging the wider structure of politics” (p.52-53).   By coalescing around the figure of John Wilkes, the popular mobilization “reinforced rather than challenged the privileges that empowered the City to resist the king and Parliament” (p.56).

As revolution raged on the other side of the English Channel after 1789, many in London believed that Britain’s 1688 revolution “had already secured many basic rights and freedoms for British subjects; the French were starting from zero” (p.257).  Arguments about the French revolution and criticisms and defense of the British constitution were kept within legal boundaries in London.  It was the British habit of free discussion, Rapport concludes, “alongside, first, the commitment to legality among the reformers and, second, the relative caution with which . . . the government proceeded against them that ensured that London avoided a revolutionary upheaval in these years” (p.221).

* * *

Rapport sets a dauntingly intricate task for himself in seeking to demonstrate how the artisanal and working-class populations of Paris, New York and London used each city’s spaces and places to abet radical democratic ideas.   The portions showing how those spaces and places helped shape revolutionary events in Paris from 1789 onward, and thereby transformed the city, are the best of his work, insightful and at times riveting.  His treatment of New York and London, where no such physical transformation occurred, has less zest.  But the tale of three cities comes together through Rapport’s detailing of moments in each place when “thousands of people, often for the first time, seized the initiative and tried to shape their own political futures” (p.317).

* * *

Thomas H. Peebles

Washington, D.C. USA

December 31, 2018


They Kept Us Out of War . . . Until They Didn’t

Michael Kazin, War Against War:

The American Fight for Peace, 1914-18 

            Earlier this month, Europe and much of the rest of the world paused briefly to observe the 100th anniversary of the day in 1918 when World War I, still sometimes called the Great War, officially ended. In the United States, where we observe Veterans Day without explicit reference to World War I, this past November 11th constituted one of the rare occasions when the American public focused on the four-year conflict that took somewhere between 9 and 15 million lives, including approximately 116,000 Americans, and shaped indelibly the course of 20th century history.  In War Against War: The American Fight for Peace, 1914-18, Michael Kazin offers a contrarian perspective on American participation in the conflict.  Kazin, professor of history at Georgetown University and editor of the avowedly leftist periodical Dissent, recounts the history of the diverse groups and individuals in the United States who sought to keep their country out of the conflict when it broke out in 1914, and how those groups changed, evolved and reacted once the United States, under President Woodrow Wilson, went to war in April 1917.

            The opposition to World War I was, Kazin writes, the “largest, most diverse, and most sophisticated peace coalition to that point in U.S. history” (p.xi). It included pacifists, socialists, trade unionists, urban progressives, rural populists, segregationists, and crusaders for African-American rights.  Women, battling at the same time for the right to vote, were among the movement’s strongest driving forces, and the movement enjoyed support from both Democrats and Republicans.  Although the anti-war opposition had a decidedly anti-capitalist strain – many in the opposition saw the war as little more than an opportunity for large corporations to enrich themselves — a handful of well-known captains of American industry and finance supported the opposition, among them Andrew Carnegie, Solomon Guggenheim and Henry Ford.  It was a diverse and colorful collection of individuals, acting upon what Kazin describes as a “profoundly conservative” (p.xviii) impulse to oppose the build up of America’s military-industrial complex and the concomitant rise of the surveillance state.  Not until the Vietnam War did any war opposition movement approach the World War I peace coalition in size or influence.

            This eclectically diverse movement was in no sense isolationist, Kazin emphasizes. That pejorative term had not yet come into popular usage.  Convinced that the United States had an important role to play on the world stage beyond its own borders, the anti-war coalition sought to create a “new global order based on cooperative relationships between nation states and their gradual disarmament” (p.xiv).  Its members hoped the United States would exert moral authority over the belligerents by staying above the fray and negotiating a peaceful end to the conflict.

             Kazin tells his story in large measure through admiring portraits of four key members of the anti-war coalition, each representing one of its major components: Morris Hillquit, a New York labor lawyer and a Jewish immigrant from Latvia, standard-bearer for the Socialist Party of America and left-wing trade unions; Crystal Eastman, a charismatic and eloquent New York feminist and labor activist, on behalf of women; and two legislative representatives, Congressman Claude Kitchin, a populist Democrat from North Carolina and an ardent segregationist; and Wisconsin Republican Senator Robert (“Fighting Bob”) La Follette, Congress’ most visible progressive. The four disagreed on much, but they agreed that industrial corporations wielded too much power, and that the leaders of American industry and finance were “eager to use war and preparations for war to enhance their profits” (p.xiv).  Other well-known members of the coalition featured in Kazin’s story include Jane Addams, renowned social activist and feminist; William Jennings Bryan, Secretary of State under President Wilson, three-time presidential candidate, and Christian fundamentalist; and Eugene Debs and Norman Thomas, successively perennial presidential candidates of the Socialist Party of America.

            Kazin spends less time on the coalition’s opponents – those who had few qualms about entering the European conflict and, short of that, supported “preparedness” (a term used throughout with quotation marks): the notion that the United States needed to build up its land and naval capabilities and increase the size of its military personnel in the event that they might be needed for the conflict.  But those favoring intervention and “preparedness” found their voice in the outsized personality of former president Theodore Roosevelt, who mixed bellicose rhetoric with unadulterated animosity toward President Wilson, the man who had defeated him in a three-way race for the presidency in 1912.  After the United States declared war in April 1917, the former Rough Rider, then fifty-eight years old, sought to assemble his own volunteer unit and depart for the trenches of Europe as soon as the unit could be organized and trained.  To head off this prospect, President Wilson steered the Selective Service Act through Congress, establishing the national draft that Roosevelt had long favored – and Wilson had previously opposed.

             Kazin’s story necessarily turns around Wilson and his fraught relationship with the anti-war coalition. Stern, rigid, and frequently bewildering, Wilson was a firm opponent of United States involvement in the war when it broke out in 1914.  In the initial months of the conflict, Wilson gave the anti-war activists reason to think they had a sympathetic ear in the White House.  Wilson wanted the United States to stay neutral in the conflict so he could negotiate a lasting and just peace — an objective that the anti-war coalition fully endorsed.  He met frequently with peace groups and took care to praise their motives.  But throughout 1915, Wilson edged ever closer to the “preparedness” side. He left many on both sides confused about his intentions, probably deliberately so.  In Kazin’s interpretation, Wilson ultimately decided that he could be a more effective negotiator for a lasting and just peace if the United States entered the war rather than remained neutral. As the United States transitioned to belligerent, Wilson transformed from sympathizer with the anti-war coalition to its suppressor-in-chief. His transformation constitutes the most dramatic thread in Kazin’s story.

* * *

              The issue of shipping on the high seas precipitated the crisis with Germany that led Wilson to call for the United States’ entry into the war.  From the war’s outset, Britain had used its Royal Navy to prevent vessels from entering German ports, a clear violation of international law (prompting the quip that Britannia both “rules the waves and waives the rules” (p.25)).  Germany, with a far smaller naval force, retaliated by using its submarines to sink merchant ships headed for enemy ports.  The German sinking of the Cunard ocean liner RMS Lusitania off the coast of Ireland on May 7, 1915, killing nearly 1,200 passengers and crew, among them 128 Americans, constituted the beginning of the end for any real chance that the United States would remain neutral in the conflict.

            A discernible pro-intervention movement emerged in the aftermath of the sinking of the Lusitania, Kazin explains.  The move for “preparedness” was no longer just the cry of the furiously partisan or a small group of noisy hawks like Roosevelt.  A wide-ranging group suddenly supported intervention in Europe or, at a minimum, an army and navy equal to any of the belligerents.  Peace activists who had been urging their neutral government to mediate a settlement in the war “now faced a struggle to keep their nation from joining the fray” (p.62).

            After the sinking of the Lusitania, throughout 1916 and into the early months of 1917, “social workers and feminists, left-wing unionists and Socialists, pacifists and non-pacifists, and a vocal contingent of senators and congressmen from both major parties,” led by La Follette and Kitchin, “worked together to stall or reverse the drive for a larger and more aggressive military” (p.63), Kazin writes.  The coalition benefited from the “eloquent assistance” of William Jennings Bryan, who had recently resigned as Secretary of State over Wilson’s refusal to criticize Britain’s embargo as well as Germany’s attacks on neutral vessels.

            In the aftermath of the sinking of the Lusitania, Wilson grappled with the issue of “how to maintain neutrality while allowing U.S. citizens to sail across the perilous Atlantic on British ships” (p.103).  Unlike the peace activists, Wilson “tempered his internationalist convictions with a desire to advance the nation’s power and status . . . As the crisis with Germany intensified, the idealism of the head of state inevitably clashed with that of citizens whose desire that America be right always mattered far more than any wish that it be mighty” (p.149).

            As events seemed to propel the United States closer to war in late 1916 and early 1917, the anti-war activists found themselves increasingly on the defensive.  They began to concentrate most of their energies on a single tactic: the demand for a popular referendum on whether the United States should go to war.  Although the idea gathered genuine momentum, there was a flagrant lack of support in Congress.  The activists never came up with a plausible argument why Congress should voluntarily give up or weaken its constitutional authority to declare war.

         In his campaign for re-election in 1916 against the Republican Party nominee, former Supreme Court Justice Charles Evans Hughes, Wilson ran as the “peace candidate,” dictated as much by necessity as desire.  “Few peace activists were ambivalent about the choice before them that fall,” Kazin writes.  “Whether as the lesser evil or a decent alternative, a second term seemed the only way to prevent Roosevelt . . . and [his] ilk from grabbing the reins of foreign policy” (p.124).  By September 1916, when Wilson left the White House for the campaign trail, he enjoyed the support of the “most left-wing, class-conscious coalition ever to unite behind a sitting president” (p.125).  Wilson eked out a narrow Electoral College victory in November over Hughes, with war opponents likely putting him over the top in three key states.

             Wilson’s re-election “liberated his mind and loosened his tongue” (p.141), as Kazin puts it.  In January 1917, he delivered to the United States Senate what came to be known as his “peace without victory” speech, in which he offered his vision for a “cooperative peace” that would “win the approval of mankind,” enforced by an international League of Peace. Borrowing from the anti-war coalition’s playbook, Wilson foreshadowed the famous Fourteen Points that would become his basis for a peace settlement at the post-war 1919 Versailles Conference: no territorial gains, self-government and national self-determination for individual states, freedom of commerce on the seas, and a national military force for each state limited in size so as not to become an “instrument of aggression or of selfish violence” (p.141).  Wilson told the Senators that he was merely offering an extension of the United States’ own Monroe Doctrine.  But although he didn’t yet use the expression, Wilson was proposing nothing less than to make the world safe for democracy.  As such, Kazin notes, he was demanding “an end to the empires that, among them, ruled close to half the people of the world” (p.141).

           Wilson’s “stunning act of oratory” (p.142) earned the full support of the anti-war activists at home and many of their counterparts in Europe.  Most Republicans, by contrast, dismissed Wilson’s ideas as an “exercise in utopian thinking” (p.143). But, two months later, in March 1917, German U-boats sank three unarmed American vessels. This was the point of no return for Wilson, Kazin argues.  The president, who had “staked the nation’s honor and prosperity on protecting the ‘freedom of the seas,’ now believed he had no choice but to go to war” (p.172).  By this time, Wilson had concluded that a belligerent America could “end the conflict more quickly and, perhaps, spur ordinary Germans to topple their leaders, emulating their revolutionary counterparts in Russia.  Democratic nations, old and new, could then agree to the just and ‘cooperative’ peace Wilson had called for back in January.  By helping to win the war, the United States would succeed where neutrality had failed” (p.172).

* * *

           As the United States declared war on Germany in April 1917 (it never declared war on Germany’s allies Austria-Hungary and Turkey), it also seemed to have declared war on the anti-war coalition  and anyone else who questioned the United States’ role in the conflict.  The Wilson administration quickly turned much of the private sector into an appendage of the state, concentrating power to an unprecedented degree in the national government in Washington.  It persecuted and prosecuted opponents of the war effort with a ferocity few in the anti-war movement could have anticipated. “In no previous war had there been so much repression, legal and otherwise” (p.188), Kazin writes.  The Wilson administration, its allies in Congress and the judiciary all embraced the view that critics of the war had to “stay silent or suffer for their dissent” (p.189).  Wilson gave a speech in June 1917 in which he all but equated opposition with treason.

          The next day, Wilson signed into law the Espionage Act of 1917, designed to prohibit interference with military operations or recruitment as well as any support of the enemies of the United States during wartime.  The following year, Congress passed the even more draconian Sedition Act of 1918, which criminalized “disloyal, profane, scurrilous, or abusive language” about the government, the flag, or the “uniform of the armed forces” (p.246). The apparatus for repressing “disloyalty” had become “one tentacle of the newly potent Leviathan” (p.192).

            Kazin provides harrowing examples of the application of the Sedition Act.  A recent immigrant from Germany received a ten-year sentence for cursing Theodore Roosevelt and cheering a German victory on the battlefield.   Another served time for expressing his view that the conflict was a “rich man’s war and the United States is simply fighting for the money” (p.245); still another was prosecuted and jailed for charging that the United States Army was a “God damned legalized murder machine” (p.245).  Socialist Party and labor leader Eugene Debs received a ten-year sentence for telling party members – at a union picnic, no less – that their voices had not been heard in the decision to declare war.  The administration was unable to explain how repression of these relatively mild anti-war sentiments was helping to make the world safe for democracy.

            Many in the anti-war coalition, understandably, fell into line or fell silent, fearing that they would be punished for “refusing to change their minds” (p.xi). Most activists understood that, as long as the conflict continued, “resisting it would probably yield them more hardships than victories” (p.193).  Those continuing in the shrunken anti-war movement felt compelled to “defend themselves constantly against charges of disloyalty or outright treason” (p.243).  They fought to “reconcile their fear and disgust at the government’s repression with a hope that Wilson might still embrace a ‘peace without victory,’ even as masses of American troops made their way to France and into battle” (p.243).

           Representative Kitchin and Senator La Follette, the two men who had spearheaded opposition to the war in Congress, refrained from expressing doubts publicly about the war effort.  Kitchin, chairman at the time of the House of Representatives’ powerful Ways and Means Committee, nonetheless structured a revenue bill to finance the war by placing the primary burden on corporations that had made “excess profits” (p.244) from military contracts.  La Follette was forced to leave the Senate in early 1918 to care for his ill son, removing him from the storm that would have ensued had he continued to espouse his unwavering anti-war views.  Feminist activist Crystal Eastman helped create the National Civil Liberties Bureau, a predecessor to the American Civil Liberties Union, and started a new radical journal, the Liberator, after the government prohibited a previous publication from using the mails.  Socialist Morris Hillquit, like La Follette, was able to stay out of the line of fire in 1918 when he contracted tuberculosis and was forced out of New York City and into convalescence in the Adirondack Mountains, 300 miles to the north.

           Although the United States was formally at war with Germany for the last 19 months of a war that lasted over four years, given the time needed to raise and train battle-ready troops, it was a presence on the battlefield for only six months.  The tardy arrival of Americans on the killing fields of Europe was, Kazin argues, “in part, an ironic tribute to the success of the peace coalition in the United States during the neutral years” (p.260-61).  Hundreds of thousands of Americans would likely have been fighting in France by the summer of 1917 if Theodore Roosevelt and his colleagues and allies had won the fight over “preparedness” in 1915 and 1916.  “But the working alliance between radical pacifists like Crystal Eastman and progressive foes of the military like La Follette severely limited what the advocates of a European-style force could achieve – before Woodrow Wilson shed his own ambivalence and resolved that Americans had to sacrifice to advance self-government abroad and preserve the nation’s honor” (p.260-61).

          * * *

          Kazin’s energetic yet judicious work sheds valuable light on the diverse groups that steadfastly followed an alternate route for advancing self-government abroad – making the world safe for democracy — and preserving their nation’s honor.  As American attention to the Great War recedes in the aftermath of this month’s November 11th remembrances, Kazin’s work remains a timely reminder of the divisiveness of the conflict.

Thomas H. Peebles

La Châtaigneraie, France

November 16, 2018

 

13 Comments

Filed under American Politics, European History, History, United States History

Solitary Confrontations

 

Glenn Frankel, High Noon:

The Hollywood Blacklist and the Making of An American Classic 

            High Noon remains one of Hollywood’s most enduringly popular movies. The term “High Noon” is now part of our everyday language, meaning a “time of a decisive confrontation or contest,” usually between good and evil, in which good is often embodied in a solitary person.  High Noon is a fairly simple story, yet filled with tension.  The film takes place in the small western town of Hadleyville.  Former marshal Will Kane, played by Gary Cooper, is preparing to leave town with his new bride, Amy Fowler, played by Grace Kelly, when he learns that notorious criminal Frank Miller, whom Kane had helped send to jail, has been set free and is arriving with his cronies on the noon train to take revenge on the marshal.  Amy, a devout Quaker and a pacifist, urges her husband to leave town before Miller arrives, but Kane’s sense of duty and honor compels him to stay. As he seeks deputies and assistance among the townspeople, Kane is rebuffed at each turn, leaving him alone to face Miller and his gang in a fatal gunfight at the film’s end.

          High Noon came to the screen in 1952 at the height of Hollywood’s anti-communist campaign, best known for its practice of blacklisting, by which actors, writers, directors, producers, and others in the film industry could be denied employment based upon past or present membership in or sympathy for the American Communist Party.  Developed and administered by film industry organizations and luminaries, among them Cecil B. DeMille, John Wayne and future American president Ronald Reagan, blacklisting arose during the early Cold War years as Hollywood’s response to the work of the United States House of Representatives’ Committee on Un-American Activities, better known as HUAC.

            Until surpassed by Senator Joseph McCarthy, HUAC was the driving force in post-World War II America’s campaign to uproot communists and communist sympathizers from all aspects of public life.  HUAC pressured Hollywood personnel with suspected communist ties or sympathies to avoid the blacklist by “cooperating” with the Committee, which entailed in particular “naming names” – identifying other party members or sympathizers.  Hollywood blacklisting had all the indicia of what we might today call a “witch hunt.”  Blacklisting also came close to derailing High Noon altogether.

         Glenn Frankel’s engrossing, thoroughly-researched High Noon: The Hollywood Blacklist and the Making of An American Classic captures the link between the film classic and Hollywood’s efforts to purge its ranks of present and former communists and sympathizers. Frankel places the anti-communist HUAC investigations and the Hollywood blacklisting campaign within the larger context of a resurgence of American political conservatism after World War II – a “right wing backlash” (p.45) — with the political right struggling to regain the upper hand after twelve years of New Deal politics at home and an alliance with the Soviet Union to defeat Nazi Germany during World War II.  There was a feeling then, as today, Frankel explains, that usurpers had stolen the country: “outsiders had taken control of the nation’s civil institutions and culture and were plotting to subvert its security and values” (p.x).   The usurpers of the post-World War II era were liberals, Jews and communists, and “self-appointed guardians of American values were determined to claw it back” (p.x).

          Hollywood, with its “extraordinary high profile” and “abiding role in our national culture and fantasies” (p.xi), was uniquely placed to shape American values and, to many, communists and Jews seemed to be doing an inordinate amount of the shaping.  In an industry that employed nearly 30,000 persons, genuine communists in Hollywood probably never exceeded 350, with screenwriters accounting for roughly half of that number.  But 175 screenwriters, unless thwarted, could freely produce what right-wing politicians termed “propaganda pictures” designed to undermine American values.  Communists constituted a particularly insidious threat because they looked and sounded indistinguishable from others in the industry, yet were “agents of a ruthless foreign power whose declared goal was to destroy the American way of life” (p.x).  That a high proportion of Hollywood’s communists were Jewish heightened suspicion of the Jews who, from Hollywood’s earliest days as the center of the film industry, had played an outsized role as studio heads, screenwriters, and agents.  Jews in Hollywood were at once “uniquely powerful” and “uniquely vulnerable” to the attacks of anti-Semites who accused them of “using the movies to undermine traditional American values” (p.13).

            Frankel’s account of this struggle over security and values involves a multitude of individuals, primarily in Hollywood and secondarily in Washington, but centers upon the interaction among three: Gary Cooper, Stanley Kramer, and Carl Foreman.  Cooper was the star of High Noon and Kramer its producer.  Foreman wrote the script and was the film’s associate director until his refusal in September 1951 to name names before HUAC forced him to leave High Noon before its completion.  Foreman and Kramer, leftist-leaning politically, were “fast-talking urban intellectuals from the Jewish ghettos of Chicago [Foreman] and New York [Kramer]” (p.xvi).  Foreman had been a member of the American Communist Party as a young adult in the 1930s until sometime in the immediate post-war years; Kramer’s relationship to the party is unclear in Frankel’s account.  Cooper was a distinct contrast to Foreman and Kramer in just about every respect, a “tall, elegant, and reticent” (p.xvi) Anglo-Saxon Protestant from rural Montana, the product of conservative Republican stock who liked to keep a low profile when it came to politics.

            Although Cooper was the star of High Noon, Foreman emerges as the star in Frankel’s examination of HUAC investigations and blacklisting.  Foreman saw his encounter with HUAC in terms much like those Will Kane faced in Hadleyville: he was the marshal, HUAC the gunmen coming to kill the marshal, and the “hypocritical and cowardly citizens of Hadleyville” found their counterparts in the “denizens of Hollywood who stood by passively or betrayed him as the forces of repression bore down” (p.xiii).  The filming of High Noon had begun a few days prior to Foreman’s testimony before HUAC and was completed in just 32 days, on what amounted to a shoestring budget of $790,000.  How the 84-minute black-and-white film survived Foreman’s departure constitutes a mini-drama within Frankel’s often gripping narrative.

* * *

         In most accounts, Hollywood’s blacklisting practices began in 1947, when ten writers and directors — the “Hollywood Ten” — appeared before HUAC and refused to answer the committee’s questions about their membership in the Communist Party.  They were cited for contempt of Congress and served time in prison.  After their testimony, a group of studio executives, acting under the aegis of the Association of Motion Picture Producers, fired the ten and issued what came to be known as the Waldorf Statement, which committed the studios to firing anyone with ties to the Communist Party, whether called to testify before HUAC or not.  This commitment in practice extended well beyond party members to anyone who refused to “cooperate” with HUAC.

           Neither Foreman nor Kramer was within HUAC’s sights in 1947.  At the time, the two had banded together in the small, independent Stanley Kramer Production Company, specializing in socially relevant films that aimed to attract “war-hardened young audiences who were tired of the slick, superficial entertainments the big Hollywood studios specialized in and [were] hungry for something more meaningful” (p.59).  In March 1951, Kramer Production became a subsidiary of Columbia Pictures, one of Hollywood’s major studios.  In June of that year, while finishing the script for High Noon, Foreman received his subpoena to testify before HUAC.  The subpoena was an “invitation to an inquisition” (p.xii), as Frankel puts it.

           HUAC, in the words of writer David Halberstam, was a collection of “bigots, racists, reactionaries and sheer buffoons” (p.76).  The Committee acted as judge, jury and prosecutor, with little concern for basic civil liberties such as the right of the accused to call witnesses or cross-examine the accuser.  Witnesses willing to cooperate with the Committee were required to undergo a “ritual of humiliation and purification” (p.xii), renouncing their membership in the Communist Party and praising the Committee for its devotion to combating the Red plot to destroy America.  A “defining part of the process” (p.xiii) entailed identifying other party members or sympathizers – the infamous “naming of names” – which became an end in itself for HUAC, not merely a means to obtain more information, since the Committee already had the names of most party members and sympathizers working in Hollywood.  Foreman was brought to the Committee’s attention by Martin Berkeley, an obscure screenwriter and ex-Communist who emerges as one of the book’s more villainous characters – Hollywood’s “champion namer of names” (p.241).

           Loath to name names, Foreman had few good options.  The primary alternative was to invoke the Fifth Amendment against self-incrimination and refuse to answer questions.  But such witnesses appeared to have something to hide, and often were blacklisted for failure to cooperate with the Committee.  When he testified before HUAC in September 1951, Foreman stressed that he loved his country as much as anyone on the Committee and used his military service during World War II to demonstrate his commitment to the United States.  But he would go no further, refusing to name names.  Foreman conceded for the record that he “wasn’t a Communist now, and hadn’t been one in 1950 when he signed the Screen Writers Guild loyalty oath” (p.201).  The Committee did not hold Foreman in contempt, as it had done with the Hollywood Ten.  But it didn’t take Foreman long to feel the consequences of his refusal to “cooperate.”

           Kramer, who had initially been supportive of Foreman, perhaps out of concern that Foreman might name him as one with communist ties, ended by acceding to Columbia Pictures’ position that Foreman was too tainted to continue to work for its subsidiary.  Foreman left Kramer Production with a lucrative separation package, more than any other blacklisted screenwriter.  His attempt to start his own film production company went nowhere when it became clear that anyone working for the company would be blacklisted.  Foreman, a “committed movie guy” who “passionately believed in [films] as the most successful and popular art form ever invented” (p.218), was finished in Hollywood.  He and Kramer never spoke again.

* * *

            Kramer had had little direct involvement with the early shooting of High Noon.  But after Foreman’s departure, he reviewed the film and was deeply dismayed by what he saw.  He responded by making substantial cuts, which he later claimed had “saved” the film.  But in Frankel’s account, Cooper rather than Kramer saved High Noon, making the film an unexpected success.  Prior to his departure, Foreman had suggested to Cooper, who was working for a fraction of his normal fee, that he consider withdrawing from High Noon to preserve his reputation.  Cooper refused.  “You know how I feel about Communism,” Frankel quotes Cooper telling Foreman, “but you’re not a Communist now and anyhow I like you, I think you’re an honest man, and I think you should do what is right” (p.170-71).

            Kramer and Foreman were initially reluctant to consider Cooper for the lead role in High Noon.  At age fifty, he “looked at least ten years too old to play the marshal.  And Cooper was exactly the kind of big studio celebrity actor that both men tended to deprecate” (p.150).  Yet, Cooper’s “carefully controlled performance,” combining strength and vulnerability, gave not only his character but the entire picture “plausibility, intimacy and human scale” (p.252), Frankel writes.  Will Kane is “no superhuman action hero, just an aging, tired man seeking to escape his predicament with his life and his new marriage intact, yet knowing he cannot . . . It is a brave performance, and it is hard to imagine any other actor pulling it off with the same skill and grace” (p.252).  None of the “gifted young bucks” whom Kramer and Foreman would likely have preferred for the lead role, such as Marlon Brando, William Holden, or Kirk Douglas, could have done it with “such convincing authenticity, despite all their talent.  In High Noon, Gary Cooper is indeed the truth” (p.252).

            High Noon also established Cooper’s co-star, Grace Kelly, playing Marshal Kane’s new wife Amy in her first major film.  Kelly was some 30 years younger than Cooper and many, including Kramer, considered the pairing a mismatch. But she came cheap and the pairing worked. Katy Jurado, a star in her native Mexico, played the other woman in the film, Helen Ramirez, who had been the girlfriend of both Marshal Kane and his adversary Miller.  During the film, she is involved romantically with Kane’s feckless deputy, Harvey Pell, played by Lloyd Bridges.  High Noon was only Jurado’s second American film, but she was perfect in the role of a sultry Mexican woman.  By design, Foreman created a dichotomy between the film’s male hero — a man of “standard masculine characteristics, inarticulate, stubborn, adept at and reliant on gun violence” (p.253) — and its two women characters who do not fit the conventional models that Western films usually impose on female characters.  The film’s “sensitive focus on Helen and Amy – remarkable for its era and genre – is one of the elements that make it an extraordinary movie” (p.255), Frankel contends.

           Frankel pays almost as much attention to the movie’s stirring theme song, “Do Not Forsake Me, Oh My Darling,” sung by Tex Ritter, as he does to the film’s characters.  The musical score was primarily the work of Dimitri Tiomkin, a Jewish immigrant from the Ukraine, with lyricist Ned Washington providing the words.  The pair produced a song that could be “sung, whistled, and played by the orchestra all the way through the film, an innovative approach that had rarely been used in movies before ” (p.230). Ritter’s raspy voice proved ideally suited to the song’s role of building tension in the film (the better known Frankie Laine had a “more synthetic and melodramatic” (p.234) version that surpassed Ritter’s in sales).  The song’s narrator is Kane himself, addressing his new bride and expressing his fears and longings in music.  The song, whose melody is played at least 12 times during the movie, encapsulates the plot while explaining the marshal’s “inner conflict in a way that he himself cannot articulate” (p.232). Its repetition throughout the film reminds us that Kane’s life and happiness are “on the line, yet he cannot walk away from his duty” (p.250).

           Frankel also dwells on the use of clocks in the film to heighten tension as 12 o’clock, High Noon, approaches.  The clocks, which become bigger as the film progresses, “constantly remind us that time is running out for our hero.  They help build and underscore the tension and anxiety of his fruitless search for support.  There are no dissolves in High Noon – none of the usual fade-ins and fade-outs connoting the unseen passage of time – because time passes directly in front of us.  Every minute counts – and is counted” (p.250).

           High Noon was an instant success from the time it came out in the summer of 1952, an “austere and unusual piece of entertainment,” as Frankel describes the film, “modest, terse, almost dour . . . with no grand vistas, no cattle drives, and no Indian attacks, in fact no gunplay whatsoever until its final showdown.  Yet its taut, powerful storytelling, gritty visual beauty, suspenseful use of time, evocative music, and understated ensemble acting made it enormously compelling” (p.249).  But the film was less popular with critics, many of whom considered it overly dramatic and corny.

          The consensus among the cognoscenti was that the film was “just barely disguised social drama using a Western setting and costumes,” as one critic put it, the “favorite Western for people who hate Westerns” (p.256).  John Wayne argued that Marshal Kane’s pleas for help made him look weak.  Moreover, Wayne didn’t like the negative portrayal of the church people, which he saw as an attack on American values.  The American Legion also attacked the film on the ground that it was infected by the input of communists and communist sympathizers.

* * *

          After leaving the High Noon set, Foreman spent much of the 1950s in London, where he had limited success in the British film industry while his marriage unraveled.  For a while, he lost his American passport, pursuant to a State Department policy of denying passports to anyone it had reason to suspect was Communist or Communist-leaning, making him a man without a country until a court overturned the policy.  Kramer left Columbia Pictures after High Noon.  He went back to being an independent producer and in that capacity established a reputation as Hollywood’s most consistently liberal filmmaker.  To this day, the families of Foreman and Kramer, who died in 1984 and 2001, respectively, continue to spar over which of the two deserves more credit for High Noon’s success.  Cooper continued to make films after High Noon, most of them westerns of middling quality, “playing the same role over and over” (p.289) as he aged and his mobility grew more restricted.  He kept in touch with Foreman up until his own death from prostate cancer in 1961.

* * *

         Frankel returns at the end of his work to Foreman’s view of High Noon as an allegory for the Hollywood blacklisting process — a single man seeking to preserve honor and confront evil alone when everyone around him wants to cut and run. But, Frankel argues, seen on the screen at a distance of more than sixty years, the film’s politics are “almost illegible.” Some critics, he notes, have suggested that Kane, rather than being a brave opponent of the blacklist, could “just as readily be seen as Senator Joseph McCarthy bravely taking on the evil forces of Communism while exposing the cowardice and hypocrisy of the Washington establishment” (p.259).  Sometimes a good movie is just a good movie.

Thomas H. Peebles

La Châtaigneraie, France

October 3, 2018

 

6 Comments

Filed under Film, Politics, United States History

Magic Moscow Moment

 

Stuart Isacoff, When the World Stopped to Listen:

Van Cliburn’s Cold War Triumph and Its Aftermath 

            Harvey Lavan Cliburn, Jr., known to the world as “Van,” was the pianist from Texas who at age 23 astounded the world when he won the first Tchaikovsky International Piano Competition in Moscow in 1958, at the height of the Cold War.  The Soviet Union, fresh from launching the satellite Sputnik into orbit the previous year and thereby gaining an edge on the Americans in worldwide technological competition, looked at the Tchaikovsky Competition as an opportunity to showcase its cultural superiority over the United States.  Stuart Isacoff’s When the World Stopped to Listen: Van Cliburn’s Cold War Triumph and Its Aftermath takes us behind the scenes of the 1958 competition to show the machinations that led to Cliburn’s selection in Moscow.

            They are intriguing, but come down to this: the young Cliburn was so impossibly talented, so far above his fellow competitors, that the competition’s jurors concluded that they had no choice but to award him the prize.  But before the jurors announced what might have been considered a politically incorrect decision to give the award to an American, they felt compelled to present their dilemma to Soviet party leader and premier Nikita Khrushchev. Considered, unfairly perhaps, a country bumpkin lacking cultural sophistication, Khrushchev asked who had been the best performer.  The answer was Cliburn.  According to the official Soviet version, Khrushchev responded with a simple, straightforward directive: “Then give him the prize” (p.156).

            Isacoff, a professional pianist as well as an accomplished writer, suggests that there was more to Khrushchev’s directive than what the official version allows.  But his response and the official announcement two days later, on April 14, 1958, that Cliburn had won first place make an endearing high point to Isacoff’s spirited biography.  The competition in Moscow and its immediate aftermath form the book’s core, about 60%. Here, Isacoff shows how Cliburn became a personality known worldwide — “the classical Elvis” and “the American Sputnik” were just two of the monikers given to him – and how his victory contributed appreciably to a thaw in Cold War tensions between the United States and the Soviet Union. The remaining 40% of the book is split roughly evenly between Cliburn’s life prior to the Moscow competition, as a child prodigy growing up in Texas and his ascendant entry into the world of competitive piano playing; and his post-Moscow life, fairly described as descendant.

            Cliburn never recaptured the glory of his 1958 moment in Moscow, and his life after receiving the Moscow prize was a slow but steady decline, up to his death from bone cancer in 2013.  For the lanky, enigmatic Texan, Isacoff writes, “triumph and decline were inextricably joined” (p.8).

* * *

            Cliburn was born in 1934, in Shreveport, Louisiana, the only child of Harvey Lavan Cliburn, Sr., and Rildia Bee O’Bryan Cliburn.  When he was six, he moved with his parents from Shreveport to the East Texas town of Kilgore.  Despite spending his earliest years in Louisiana, Cliburn always considered himself a Texan, with Kilgore his hometown.  Cliburn’s father worked for Magnolia Oil Company, which had relocated him from Shreveport to Kilgore, a rough-and-tumble oil “company town.”  We learn little about the senior Cliburn in this biography, but mother Rildia Bee is everywhere.  She was a dominating presence in her son’s life, not only in his youthful years but throughout his adulthood, up to her death in 1994 at age 97.

        Prior to her marriage, Rildia had been a pupil of renowned pianist Arthur Friedheim.  It was Southern mores, Isacoff suggests, that discouraged her from pursuing what looked like her own promising career as a pianist.  But with the arrival of her son, she found a new outlet for her seemingly limitless musical energies.  Rildia was “more teacher than nurturer” (p.12), Isacoff writes, bringing discipline and structure to her son, who had started playing the piano around age 3.  From the start, the “sonority of the piano was entwined with his feelings for his mother” (p.12).  By age 12, Cliburn had won a statewide piano contest, and had played with the Houston Symphony Orchestra in a radio concert.  In adolescence, with his father fading in importance, Cliburn’s mother continued to dominate his life. “Despite small signs of teenage waywardness, when it came to his mother, Van was forever smitten” (p.21).

               In 1951, when their son was 17, Rildia and Harvey Sr. sent him off to New York to study at the prestigious Juilliard School, a training ground for future leaders in music and dance.  There, he became a student of the other woman in his life, Ukraine-born Rosina Lhévinne, a gold-medal graduate of the Moscow Conservatory whose late husband Josef had been considered one of the world’s greatest pianists.  Like Rildia, Lhévinne was a force of nature, a powerful influence on the young Cliburn.  Improbably, Lhévinne and Rildia for the most part saw eye to eye on the best way to nurture the talents of the prodigious young man.  Both women focused Cliburn on “technical finesse and beauty of sound rather than on musical structure,” realizing that his best qualities as a pianist “rested on surface polish and emotional persuasiveness” (p.54).  Each recognized that for Cliburn, music would always be “visceral, not abstract or academic.  He played the way he did because he felt it in the core of his being” (p.34).

           More than Rildia, Lhévinne was able to show Cliburn how to moderate and channel these innate qualities.  Without her stringent guidance, Isacoff indicates, Cliburn might have lapsed into “sentimentality, deteriorating into the pianistic mannerisms of a high romantic” (p.56).  Although Cliburn learned through Lhévinne to hold his interpretative flourishes in check, his “overriding personality – emotionally exuberant, and unshakably sentimental – was still present in every bar” (p.121).  By the time he left for the Moscow competition, Cliburn had demonstrated a “natural ability to grasp and convey the meaning of the music, to animate the virtual world that arises through the art’s subtle symbolic gestures. It set him apart” (p.18).

           During his Juilliard years in New York, the adult Cliburn personality the world would soon know came into view: courteous and generous, sentimental and emotional.  He had by then also developed the idiosyncratic habit of being late for just about everything, which continued throughout his life.  Isacoff mentions one concert after another in which Cliburn was late by periods that often became a matter of hours.  Both in the United States and abroad, he regularly compensated for showing up late by beginning with America’s national anthem, “The Star Spangled Banner.”  At Juilliard, Cliburn also began a smoking habit that stayed with him for the remainder of his life.  Except when he was actually playing — when he had the habit of looking upward, “as if communing with the heavens whenever the music reached an emotional peak” (p.6) — it was difficult to get a photo of him without a cigarette in his hands or mouth.

            It may have been at Juilliard that Cliburn had his first homosexual relationship, although Isacoff plays down this aspect of Cliburn’s early life.  He mentions Cliburn’s experience in high school dating a girl and attending the senior prom.  Then, a few pages later, he notes matter-of-factly that a friendship with a fellow male Juilliard student had “blossomed into romance” (p.35).  But there are many questions about Cliburn’s sexuality that seem pertinent to our understanding of the man.  Did Cliburn undergo any of the torment that often accompanies the realization in adolescence that one is gay, especially in the 1950s?  Did he “come out” to his friends and acquaintances, in Texas or New York, or did he live the homosexual life largely in the closet?  Were his parents aware of his sexual identity and, if so, what was their reaction?  None of these questions is addressed here.

             With little fanfare, Juilliard nominated Cliburn in early 1958 for the initial Tchaikovsky International Competition, taking advantage of an offer from the Rockefeller Foundation to pay travel expenses for one entrant in each of the competition’s two components, piano and violin.  The Soviet Union, which paid the remaining expenses for the participants, envisioned a “high-culture version of the World Cup, pitting musical talents from around the globe against one another” (p.4).  The Soviets confidently assumed that showcasing their violin and piano expertise, after the previous year’s technological success with the Sputnik launch, would provide another propaganda victory over the United States.

             Soviet pianists who wished to enter the competition had to pass a daunting series of tests, musical and political, and undergo training similar to that of the country’s Olympic athletes.  Many of the Soviet Union’s emerging piano stars were reluctant to jump into the fray.  Each had a specific reason, along with a “general reluctance to become involved in the political machinations of the event” (p.59).  Lev Vlassenko, a “solid, well-respected pianist” who became a lifelong friend of Cliburn in the aftermath of the competition, emerged as the presumptive favorite, “clearly destined to win” (p.60).

            On the American side, the US State Department only reluctantly gave its approval to the competition, fearing that it would be rigged.  The two pianists whom the Soviets considered the most talented Americans, Jerome Lowenthal and Daniel Pollack, traveled to Moscow at their own expense, unlike Cliburn (pop singer Neil Sedaka was among the competitors for the US but was barred by the Soviets as too closely associated with decadent rock ‘n roll; they undoubtedly did Sedaka a favor, as his more lucrative pop career was just then beginning to take off).  Other major competitors came from France, Iceland, Bulgaria, and China.

            For the competition’s first round, Cliburn was assigned pieces from Bach, Mozart, Chopin, Scriabin, Rachmaninoff, Liszt and Tchaikovsky.  The audience at the renowned Moscow Conservatory, where the competition took place, fell from the beginning for the Texan and his luxurious sound. They “swooned at the crooner in him . . . Some said they discerned in his playing a ‘Russian soul’” (p.121).  But among the jurors, who carried both political and aesthetic responsibilities, reaction to Cliburn’s first round was mixed.  Some were underwhelmed with his renditions of Mozart and Bach, but all found his Tchaikovsky and Rachmaninoff “out of this world,” as one juror put it (p.120).

          Isacoff likens the jurors’ deliberations to a round of speed dating, “where the sensitive antennae of the panelists hone in on the character traits of each candidate. . . There is no magical formula for choosing a winner; in the end, the decision is usually distilled down to a basic overriding question: Do I want to hear this performer again?”(p.117).  Famed pianist Sviatoslav Richter, who served on the jury, emerges here as the equivalent of the “hold out juror” in an American criminal trial, “willing to create a serious ruckus when he felt that the deck was being stacked against the American.  As the competition progressed, his fireworks in the jury room would be every bit the equal of the ones onstage” (p.114).

            Cliburn’s second round program was designed to show range.  Beethoven, Chopin and Brahms were the heart of a romantic repertoire.  He also played the Prokofiev Sixth, a modernist piece that reflected the political tensions and fears of 1940 Russia.  Cliburn received a 15-minute standing ovation at the end of the round, the audience voting literally with its feet and hands.  In the jury room, Richter continued to press the case for Cliburn, although the jury ranked him only third, tied with Soviet pianist Naum Shtarkman. Overall, Vlassenko ranked first and eminent Chinese pianist Shikun Liu second.

            But in the third round, Cliburn blew the competition away.  The round  began with Tchaikovsky’s First Piano Concerto, for which Cliburn delivered an “extraordinary” interpretation, with every tone “imbued with an inner glow, with long phrases concluding in an emphatic, edgy pounce. The effect was simply breathtaking” (p.146). Cliburn’s rendition of Rachmaninoff’s “treacherously difficult” (p.147) Piano Concerto no. 3 was even more powerful.  In prose that strains to capture Cliburn’s unique brilliance, Isacoff explains:

After Van, people would never again hear this music the same way. . . There is no simple explanation for why in that moment Van played perhaps the best concert of his life. Sometimes a performer experiences an instant of artistic grace, when heaven seems to open up and hold him in the palm of its hand – when the swirl of worldly sensations gives way to a pervasive, knowing stillness, and he feels connected to life’s unbroken dance.  If that was not exactly Van’s experience when playing Rachmaninoff Concerto no. 3, it must have come close (p.146-47).

         Cliburn had finally won over even the most recalcitrant jurors, who briefly considered a compromise in which Cliburn and Vlassenko would share the top prize.  But the final determination was left to premier Khrushchev.  The Soviet leader’s instantaneous and decisively simple response quoted above was the version released to the press.  But with the violin component of the competition going overwhelmingly to the Soviets, the ever-shrewd Khrushchev appears to have concluded that awarding the piano prize to the American would underscore the competition’s objectivity and fairness.  One advisor recalled Khrushchev saying to her: “The future success of this competition lies in one thing: the justice that the jury gives” (p.156).  The jury’s official and public decision of April 14, 1958 had Cliburn in first place, with Vlassenko and Liu sharing second.  Cliburn could not have accomplished what he did, Isacoff writes, without Khrushchev, his “willing partner in the Kremlin” (p.206).

        Cliburn had another willing partner in Max Frankel, then the Moscow correspondent for the New York Times (and later, its Executive Editor). Frankel had sensed a good story during the competition and reported extensively on all its aspects.  He also pushed his editors back home to put his dispatches on page 1.  One of his stories forthrightly raised the question whether the Soviets would have the courage to award the prize to Cliburn.  For Isacoff, Frankel’s reporting and the pressure he exerted on his Times editors to give it a prominent place also contributed to the final decision.

             After his victory in Moscow, Cliburn went on an extensive tour within the Soviet Union. To the adoring Russians, Cliburn represented the “new face of freedom.” Performing under the auspices of a repressive regime, he “seemed to answer to no authority other than the shifting tides of his own soul” (p.8).  Naïve and politically unsophisticated, Cliburn raised concerns at the State Department when he developed the habit of describing the Russians as “my people,” comparing them to Texans and telling them that he had never felt so at home anywhere else.

          A month after the Moscow victory, Cliburn returned triumphantly to the United States amidst a frenzy that approached what he had generated in the Soviet Union.  He became the first (and, as of now, only) classical musician to be accorded a ticker tape parade in New York City, in no small measure because of lobbying by the New York Times, which saw the parade as vindication for its extensive coverage of the competition.

          After Cliburn’s Moscow award, the Soviet Union and the United States agreed to host each other’s major exhibitions in the summer of 1959.  It started to seem, Isacoff writes, that “after years of protracted wrangling, a period of true detente might actually be dawning” (p.174).   The cultural attaché at the American Embassy in Moscow wrote that Cliburn had become a “symbol of the unifying friendship that overcomes old rivalries.  . . a symbol of art and humanity overruling political pragmatics” (p.206).

           A genuine if improbable bond of affection had developed in Moscow between Khrushchev and Cliburn. That bond endured after Cold War relations took several turns for the worse, first after the Soviets shot down the American U-2 spy plane in 1960, followed by erection of the Berlin Wall in 1961, and the direct confrontation in 1962 over Soviet placement of missiles in Cuba. The bond even continued after Khrushchev’s fall from power in 1964, indicating that it had some basis beyond political expediency.

           But Cliburn’s post-Moscow career failed to recapture the magic of his spring 1958 moment.  The post-Moscow Cliburn seemed to be beleaguered by self-doubt and burdened by psychological tribulations that are not fully explained here.  “Everyone had expected Van’s earlier, youthful qualities to mature and deepen over time,” Isacoff writes.  But he “never seemed to grow into the old master they had hoped for . . . At home, critics increasingly accused Van of staleness, and concluded he was chasing after momentary success with too little interest in artistic growth” (p.223).  Even in the Soviet Union, where he made several return visits, critics “began to complain of an artistic decline” (p.222).  In these years, Cliburn “developed an enduring fascination with psychic phenomena and astrology that eventually grew into an obsession. The world of stargazing became a vital part of his life” (p.53).

            Cliburn’s mother remained a dominant force in his life throughout his post-Moscow years, serving as his manager until she was almost 80 years old.  As she edged toward 90, she and her son continued to address one another as “Little Precious” and “Little Darling” (p.230).  Her death at age 97 in 1994 was predictably devastating for Cliburn.  In musing about his mother’s effect on Cliburn’s career trajectory, Isacoff wonders whether Rildia Bee, the “wind that filled his sails,” might also have been the “albatross that sunk him” (p.243).  While many thought that Cliburn might collapse with the death of his mother, by this time he was in a relationship with Tommy Smith, a music student 29 years younger.  With Smith, Cliburn had “at last found a fulfilling, loving union” (p.242).  Smith traveled regularly with Cliburn, even accompanying him to Moscow in 2004, where none other than Vladimir Putin presented Cliburn with a friendship award.  Smith was at Cliburn’s side throughout his battle with bone cancer, which took the pianist’s life in 2013 at age 79.

* * *

            Tommy Smith became the happy ending to Cliburn’s uneven life story — a story which for Isacoff resembles that of a tragic Greek hero who “rose to mythical heights in an extraordinary victory that proved only fleeting, before the gods of fortune exacted their price” (p.8).

Thomas H. Peebles

La Châtaigneraie, France

September 5, 2018

 

1 Comment

Filed under American Politics, History, Music, Soviet Union, United States History

Two Who Embodied That Sweet Soul Music

 

Jonathan Gould, Otis Redding: An Unfinished Life 

Tony Fletcher, In the Midnight Hour: The Life and Soul of Wilson Pickett 

        By 1955, the year I turned 10, I had already been listening to popular music for a couple of years on a small bedside radio my parents had given me. My favorite pop singers were Patti Page and Eddie Fisher, whose soft, staid, melodious songs seemed in tune with the Big Band and swing music of my parents’ generation. The previous year, 1954, a guy named Bill Haley had come out of nowhere onto the popular music scene with “Rock Around the Clock,” which he followed in 1955 with “Shake, Rattle, and Roll.” Haley’s two hits became the centerpiece of my musical world. They were so different: they moved, they jumped – they rocked and they rolled! – in a way that resembled nothing I had heard from Page, Fisher and their counterparts.

        The term “rock ‘n roll” was already in use in 1955 to describe the new style that Haley’s songs represented.  But “Rock Around the Clock” and “Shake, Rattle, and Roll” were not the only hit tunes I listened to that year that seemed light years apart from what I had been familiar with.  There was Ray Charles, with “I Got a Woman;” Chuck Berry, with “Maybellene;” and, most exotic of all, a man named Richard Penniman, known in the record world as “Little Richard,” with a song titled “Tutti Frutti.”  What I didn’t realize then was that Charles, Berry and Penniman were African-Americans, whereas Haley was a white guy, and that Charles and his counterparts were bringing their brand of popular music, then officially called “rhythm and blues” (and more colloquially “R & B”), into the popular music mainstream on a massive scale for white listeners like me.  Within a decade after that breakthrough year of 1955, “soul music” had largely supplanted “rhythm and blues” as the term of choice to refer to African-American popular music.

          Also listening to Charles, Berry and Penniman in 1955 were two African-American teenagers from the American South, both born in 1941, both named for their fathers: Otis Redding, Jr., and Wilson Pickett, Jr.  Redding was from Macon, Georgia (as was “Little Richard” Penniman). Pickett was from rural Alabama, but lived a substantial part of his adolescence with his father in Detroit. Each had already shown talent for gospel singing, which was then becoming a familiar pathway for African-Americans into secular rhythm and blues, and thus into the burgeoning world of popular music. A decade later, the two found themselves near the top of a staggering alignment of talent in the popular music world.

           As I look back at the period that began in 1955 and ended around 1970, I now see a golden era of American popular music.  It saw the rise of Elvis Presley, the Beatles, the Rolling Stones, and Bob Dylan, along with oh so many stellar practitioners of that “sweet soul music,” to borrow from the title of a 1967 hit which Redding helped develop.  Ray Charles, Chuck Berry, and Little Richard Penniman may have jump-started the genre in that pivotal year 1955, but plenty of others were soon competing with these pioneers: Sam Cooke, James Brown (another son of Macon, Georgia), Fats Domino, Marvin Gaye, the Platters, the Temptations, Clyde McPhatter and later Ben E. King and the Drifters, Curtis Mayfield and the Impressions, and Smokey Robinson and the Miracles were among the most prominent male stars, while Aretha Franklin, Mary Wells, Dionne Warwick, the Marvelettes, the Shirelles, Diana Ross and the Supremes, and Martha Reeves and the Vandellas were among the women who left their imprint upon this golden era.

          But if I had to pick two songs that represented the quintessence of that sweet soul music in this golden era, my choices would be Pickett’s “In the Midnight Hour,” and Redding’s “Sittin’ on the Dock of the Bay,” two songs that to me still define and embody soul music. Two recent biographies seek to capture the men behind these irresistible voices: Jonathan Gould’s Otis Redding: An Unfinished Life, and Tony Fletcher’s In the Midnight Hour: The Life and Soul of Wilson Pickett.  Despite Redding and Pickett’s professional successes, their stories are both sad, albeit in different ways.

          Gould’s title reminds us that Redding died before the end of the golden age, in a plane crash in Wisconsin in December 1967, at age 26, as his career was soaring.  Pickett in Fletcher’s account had peaked by the end of the 1960s, with his career thereafter going into a steep downward slide.  Through alcohol and drugs, Pickett destroyed himself and several people around him.  Most tragically, Pickett physically abused the numerous women in his life.  Pickett died in January 2006, at age 64, of a heart attack, most likely brought about at least in part by years of substance abuse.

        Popular music stars are rarely like poets, novelists, even politicians who leave an extensive written record of their thoughts and activities.   The record for most pop music stars consists primarily of their records.  Gould, more handicapped than Fletcher in this regard given Redding’s premature death in 1967, gets around this obstacle by producing a work that is only about one-half Otis Redding biography.  The other half of his work provides a textbook overview of African-American music in the United States and its relationship to the condition of African-Americans in the United States.

        Unlike many of their peers, neither Redding nor Pickett manifested much outward interest in the American Civil Rights movement that was underway as their careers took off and peaked. But the story of African-American singers beginning their careers in the 1950s and rising to prominence in the lucrative world of 1960s pop music cannot be told apart from that movement.  At every phase of his story of Otis Redding, Gould reminds readers what was going on in the quest for African-American equality: Rosa Parks and the Montgomery bus boycott, Dr. Martin Luther King’s marches, Civil Rights legislation passed under President Lyndon Johnson, and the rise of Malcolm X’s less accommodating message about how to achieve full equality are all part of Gould’s story, as are the day-to-day indignities that African-American performers endured as they advanced their careers.  Fletcher does not ignore this background – no credible biographer of an African-American singer in the ‘50s and ‘60s could – but it is less prominent in his work.

        More than Fletcher, Gould also links African-American music to African-American history.  He treats the role music played for African-Americans in the time of slavery, during Reconstruction, during the Jim Crow era, and into the post-World War II and modern Civil Rights era. Gould’s overview of African-American history through the lens of African-American music alone makes his book worth reading, and may give it particular appeal to readers from outside the United States who know and love American R&B and soul music, but are less familiar with the historical and sociological context in which it emerged.  But both writers provide lively, detailed accounts of the 1950s and 1960s musical scene in which Redding and Pickett rose to prominence.  Just about every soul music practitioner whom I admired in that era makes an appearance in one or both books.  The two books should thus constitute a welcome trip down memory lane for those who still love that sweet soul music.

* * *

        Otis Redding grew up in a home environment far more stable than that of Wilson Pickett.  Otis was the fourth child, after three sisters, born to Otis Sr. and his wife Fannie. Otis Sr. had serious health issues, but worked while he could at Robbins Air Force base, just outside Macon, Georgia.  Although only minimally educated, Otis Sr. and Fannie saw education as the key to a better future for their children.  They were particularly disappointed when Otis Jr. showed little interest in his studies and dropped out of high school at age 15. As an adolescent, Otis Jr. was known as a “big talker and a good talker, someone who could ‘run his mouth’ and hold his own in the endless arguments and verbal contests that constituted a prime form of recreation among people who quite literally didn’t have anything better to talk about” (Gould, p.115; hereafter “G”).

        Wilson Pickett was one of 11 children born into a family of sharecroppers, barely surviving in the rigidly segregated world of rural Alabama.  When Wilson, Jr. was seven, his father took the family to Detroit, Michigan, in search of a better life, and landed a job at Ford Motor Company. But the family came apart during the initial time in Detroit. His mother Lena returned to Alabama, and young Wilson ended up spending time in both places.  Wilson was subject to harsh discipline at home at the hands of both his mother and his father and grew into an irascible young man, quick to anger and frequently involved as an adolescent in physical altercations with classmates and friends.  His irascibility “provoked ever more harsh lashings, and because these still failed to deter him, it created an especially vicious cycle,” Fletcher writes, with the excessive violence Wilson later perpetrated on others representing a “continuation of the way he had been raised” (Fletcher, p.17; hereafter “F”). For a while, Pickett attended Detroit’s Northwestern High School, where future soul singers Mary Wells and Florence Ballard were also students. But Pickett, like Redding, did not finish high school.

         Both married young. Otis married his childhood sweetheart Zelma Atwood at about the time he should have been graduating from high school, when Zelma was pregnant with their second child.  Otis arrived more than an hour late for his wedding. Despite this less-than-promising beginning, he stayed married to Zelma for the remainder of his unfinished life and became a loyal and dedicated father to two additional children. Pickett married his girlfriend Bonnie Covington at age 18, when she too was pregnant. The couple stayed technically married until 1986, but spent little time together. Pickett’s relationships with his numerous additional female partners throughout his adult life all ended badly.

         Pickett discovered his singing talent through gospel music both in church in rural Alabama and on the streets of Detroit.  In the rigidly segregated South, Fletcher explains, the African-American church provided schooling, charity and community, along with an opportunity to listen to and participate in music.  Gospel was often the only music that young African-Americans in the 1940s and early 1950s were exposed to.  “No surprise, then, that for a young Wilson Pickett, gospel music became everything” (F., p.18).  Similarly, it was “all but inevitable that Otis Redding would choose to focus his early musical energies on gospel singing” (G., p.62) at the Baptist Church in Macon which his parents attended.

        Redding gained attention as a 16-year-old for his credible imitations of Little Richard.  Soon after, he was able to replicate fluently the major R & B songs of the late 1950s.  Through a neighborhood friend, Johnny Jenkins, a skilled guitarist, Redding joined a group called the Pinetoppers, which played at local African American clubs – dubbed the “Chitlin’ circuit” – and earned money playing at all-white fraternity parties at Mercer University in Macon and the University of Georgia.  Redding also spent a short time in Los Angeles visiting relatives, where he fell under the spell of Sam Cooke.  Pickett started singing professionally in Detroit with a group known as the Falcons, which also featured Eddie Floyd, who would later go on to record “Knock on Wood,” a popular hit of the mid-60s.  Pickett’s first solo recording came in 1962, “If You Need Me.”

          Redding and Pickett in these two accounts had little direct interaction, and although they looked upon one another as rivals as their careers took off, each appears to have had a high degree of respect for the other. But each had a contract with Atlantic Records, and their careers thus followed closely parallel tracks.  Based in New York, Atlantic signed and marketed some of the most prominent R & B singers of the late 1950s and early 1960s, including Ray Charles and Aretha Franklin (whose charms were felt by both Redding and Pickett), along with several leading jazz artists and a handful of white singers. By the mid-1960s, Atlantic and its Detroit rival, Berry Gordy’s Motown Records, dominated the R & B sector of American popular music.

       Both men’s careers benefitted from the creative marketing of Jerry Wexler, who joined Atlantic in 1953 after working for Billboard Magazine (where he had coined the term “rhythm and blues” to replace “race music” as a category for African American music). Atlantic and Wexler cultivated licensing arrangements with smaller recording companies where both Redding and Pickett recorded, including Stax in Memphis, Tennessee, and Fame in Muscle Shoals, Alabama.  Redding and Pickett’s relationships with Wexler at Atlantic, and with a colorful cast of characters at Stax and Fame, play a prominent part in the two biographies.

           But the most affecting business relationship in the two books is the one Redding established with Phil Walden, his primary manager and promoter during his entire career.  Walden, a white boy from Macon, the same age as Redding, loved popular music of all types and developed a particular interest in the burgeoning rhythm and blues style.  Phil initially booked Otis to sing at fraternity parties at all-white Mercer University in Macon, where he was a student, and somehow the two young men from different worlds within the same hometown bonded.  Gould uses the improbable Redding-Walden relationship to illustrate how complex black-white relationships could be in the segregated South, and how the two young men navigated these complexities to their mutual benefit.

        In 1965, Pickett produced his first hit, “In the Midnight Hour,” “perhaps the song most emblematic of the whole southern soul era” (F., p.74).  The song appealed to the same white audiences that were listening to the Beatles, the Rolling Stones and the other British invasion bands.  It was “probably the first southern soul recording to have such an effect on such a young white audience,” Fletcher writes, “yet it was every bit an authentic rhythm and blues record too, the rare kind of single that appealed to everyone without compromising” (F., p.76).

          Pickett had three major hits the following year, 1966: “634-5789,” “Land of 1,000 Dances,” and “Mustang Sally.”  The first two rose to #1 on the R & B charts.  Although “634-5789” was in Fletcher’s terms a “blatant rip-off” of the Marvelettes’ “Beechwood 4-5789” and the “closest Pickett would ever come to sounding like part of Motown” (F., p.80), it surpassed “In the Midnight Hour” in sales.  In 1968, Pickett turned the Beatles’ “Hey Jude” into his own hit.  He also made an eye-opening trip to the newly independent African nation of Ghana, as part of a “Soul to Soul” group that included Ike and Tina Turner and Roberta Flack.  Pickett’s “In the Midnight Hour” worked the 100,000-plus crowd into a frenzy, Fletcher recounts.  Pickett was the “ticket that everyone wanted to see” (F., p.169) and his performance in Ghana may have marked his career’s high point (although the tour included an embarrassing low point when Pickett and Ike Turner got into a fight over dressing room priorities).

            “Dock of the Bay,” the song most closely identified with Otis Redding, was released in 1968 and became the first posthumous number one hit in American music history.  At the time of his death in late 1967, Redding had firmly established his reputation with a remarkable string of hits characterized by powerful emotion and depth of voice: “Try a Little Tenderness,” “These Arms of Mine,” “Pain in My Heart,” “Mr. Pitiful,” and “I’ve Been Loving You Too Long.”  Just as Pickett had turned the Beatles’ “Hey Jude” into a hit, Redding “covered,” to use the music industry term, the Rolling Stones’ signature hit “Satisfaction” with his own idiosyncratic version.  Pickett’s “Hey Jude” and Redding’s “Satisfaction,” the two authors note, deftly reversed a trend in popular music, in which for years white singers had freely appropriated African-American singers’ work.

          Gould begins his book with what proved to be the high-water mark of Redding’s career, his performance at the Monterey Pop Festival in June 1967.  There, he mesmerized the mostly white audience – “footloose college students, college dropouts, teenaged runaways, and ‘flower children’” (G., p.1) – with an electrifying five-song performance, “song for song and note for note, the greatest performance of his career” (G., p.412).  The audience, which had come to hear the Jefferson Airplane, Janis Joplin and Jimi Hendrix, rewarded Redding with an uninterrupted 10-minute standing ovation.

          After Monterey, Redding developed throat problems that required surgery.  During his recuperation, he developed “Dock of the Bay.” Gould sees affinities in the song to the Beatles’ “A Day in the Life.” Otis was seeking a new form of musical identity, Gould contends, becoming more philosophical and introspective, “shedding his usual persona of self-assurance and self-assertion in order to convey the uncertainty and ambivalence of life as it is actually lived” (G., p.447).

          Redding’s premature death, Gould writes, “inspired an outpouring of publicity that far exceeded the sum of what was written about him during his life” (G., p.444). Both writers quote Jerry Wexler’s eulogy: Otis was a “natural prince . . . When you were with him he communicated love and a tremendous faith in human possibility, a promise that great and happy events were coming” (G., p.438; F., p.126). There is a tinge of envy in Fletcher’s observation that Otis’ musical reputation remained “untarnished – preserved at its peak by his early death” (F., p.126).

          Pickett’s story is quite the opposite.  Although he had a couple of mid-level hits in the 1970s, Pickett’s life entered a long, slow but steady decline in the years following Redding’s death.  During these years Pickett drank excessively and became a regular cocaine user. His father had struggled with alcohol, and Pickett exhibited all the signs of severe alcoholism, including heavy morning drinking. Fletcher describes painful instances of domestic violence perpetrated against each of the women with whom Pickett lived.  He was the subject of numerous civil complaints and served some jail time for domestic violence offenses.  Of course, had Redding lived longer, he too might have gone into such a decline; his career might have plateaued and edged into mediocrity, like Pickett’s; and his personal life might have become just as messy.  We’ll never know.

* * *

          Pickett was far from the only star whose best songs were behind him as the 1970s dawned.  Elvis comes immediately to mind, but the same could be said of the Beatles and the Rolling Stones. Berry Gordy moved his Motown operation from Detroit to Los Angeles in 1972, where it never recaptured the spark it had enjoyed . . . in Motown.   By 1970, a harder form of rock, intertwined with the psychedelic drug culture, was in competition with that sweet soul music. The 1960s may have been a turbulent decade, but the popular music trends that began in 1955 and culminated in that decade were, as Gould aptly puts it, “graced by the talents of an incomparable generation of African-American singers” (G., p.465). The biographies of Otis Redding and Wilson Pickett take us deeply into those times and their unsurpassed music. It was fun while it lasted.

Thomas H. Peebles

Marseille, France

February 26, 2018

P.S. For an audio trip down memory lane, please click these links:

https://www.youtube.com/watch?v=rTVjnBo96Ug

https://www.youtube.com/watch?v=FGVGFfj7POA

https://www.youtube.com/watch?v=sp3JOzcpBds

 

 


Filed under American Society, Biography, Music, United States History

Inside Both Sides of Regime Change in Iraq

 

John Nixon, Debriefing the President:

The Interrogation of Saddam Hussein 

          When Saddam Hussein was captured in Iraq in December 2003, it marked only the second time in the post-World War II era that the United States had detained and questioned a deposed head of state, the first being Panama’s Manuel Noriega in 1989.  On an American base near Baghdad, CIA intelligence analyst John Nixon led the initial round of questioning of Saddam in December 2003 and January 2004.  In the first two-thirds of Debriefing the President: The Interrogation of Saddam Hussein, Nixon shares some of the insights he gained from that questioning — insights about Saddam himself, his rule, and the consequences of removing him from power.

        Upon return to the United States, Nixon became a regular at meetings on Iraq at the White House and National Security Council, including several with President George W. Bush.   The book’s final third contains Nixon’s account of these meetings, which continued up to the end of the Bush administration. In this portion of the book, Nixon also reflects upon the role of CIA intelligence analysis in the formulation of foreign policy.  Nixon is one of the few individuals — maybe the only individual — who had extensive exposure both to Saddam and to those who drove the decision to remove him from power in 2003.  Nixon thus offers readers of this compact volume a formidable inside perspective on Saddam’s regime and the US mission to change it.

         But while working through Nixon’s account of his meetings with Saddam, I was puzzled by his title, “Debriefing the President,” asking myself, which president? Saddam Hussein had held the title of President of the Republic of Iraq and continued to refer to himself as president after he had been deposed, clinging tenaciously to the idea that he was still head of the Iraqi state. So does the “president” in the title refer to Saddam Hussein or George W. Bush? With the first two-thirds of the book detailing Nixon’s discussions with Saddam, I began to think that the reference was to the former Iraqi leader, which struck me as oddly respectful of a brutal tyrant and war criminal.  But this ambiguity may be Nixon’s way of highlighting one of his major objectives in writing this book.

          Nixon seeks to provide the reading public with a fuller and more nuanced portrait of Saddam Hussein than that which animated US policymakers and prevailed in the media at the time of the US intervention in Iraq, which began fifteen years ago next month.  By detailing the content of his meetings with Saddam to the extent possible – the book contains numerous passages blacked out by CIA censors — Nixon hopes to reveal the man in all his twisted complexity. He recognizes that Saddam killed hundreds of thousands of his own people, launched a fruitless war with Iran and used chemical weapons without compunction.  He “took a proud and very advanced society and ground it into dirt through his misrule” (p.12), Nixon writes, and thus deserves the sobriquet “Butcher of Baghdad.”  But while “tyrant,” “war criminal” and “Butcher of Baghdad” can be useful starting points in understanding Saddam, Nixon seems to be saying, they should not be the end point. “It is vital to know who this man was and what motivated him.  We will surely see his likes again” in the Middle East (p.9), he writes.

          When Nixon returned to the United States after his interviews with Saddam, he was surprised that none of the high-level policy makers he met with seemed interested in the question whether the United States should have removed Saddam from power.  Nixon addresses this question in his final pages with a straightforward and unsparing answer: regime change was a catastrophe for both Iraq and the United States.

* * *

           Nixon began his career as a CIA analyst in 1998.  Working at CIA Headquarters in Virginia, he became a “leadership analyst” on Iraq, responsible for developing information on Saddam Hussein: “the family connections that helped keep him in power, his tribal ties, his motives and methods, everything that made him tick. It was like putting together a giant jigsaw puzzle with small but important pieces gleaned from clandestine reporting and electronic intercepts” (p.38).  In October 2003, roughly five months after President Bush had famously declared “mission accomplished” in Iraq, Nixon was sent from headquarters to Baghdad.  There, he helped CIA operatives and Army Special Forces target individuals for capture.  At the top of the list was HVT-1, High Value Target Number 1, Saddam Hussein.

           After Saddam was captured in December 2003 at the same farm near his hometown of Tikrit where he had taken refuge in 1959 after a bungled assassination attempt upon the Iraqi prime minister, Nixon confirmed Saddam’s identity.  US officials had assumed that Saddam would “kill himself rather than be captured, or be killed as he tried to escape. When he was captured alive, no one knew what to do” (p.76).  Nixon was surprised that the CIA became the first US agency to meet with Saddam. His team had little time to prepare or coordinate with other agencies with an interest in information from Saddam, particularly the Defense Department and the FBI.  “Everything had to be done on the fly.  We learned a lot from Saddam, but we could have learned a lot more” (p.84-85).

          Nixon’s instructions from Washington were that no coercive techniques were to be used during the meetings.  Saddam was treated, Nixon indicates, in “exemplary fashion – far better than he treated his old enemies.  He got three meals a day.  He was given a Koran and an Arabic translation of the Geneva conventions. He was allowed to pray five times each day according to his Islamic faith” (p.110).   But Nixon and his colleagues had few carrots to offer Saddam in return for his cooperation. Their position was unlike that of a prosecutor who could ask a judge for leniency in sentencing in exchange for cooperation.  Nixon told Saddam that the meetings were “his chance, once and for all, to set the record straight and tell the world who he was” (p.83).  Gradually, Nixon and his colleagues built a measure of rapport with Saddam, who clearly enjoyed the meetings as a break from the boredom of captivity.

          Saddam, Nixon found, had  “great charisma” and “an outsize presence. Even as a prisoner who was certain to be executed, he exuded an air of importance” (p.81-82).  He was “remarkably forthright when it suited his purposes. When he felt he was in the clear or had nothing to hide, he spoke freely. He provided interesting insights into the Ba’ath party and his early years, for example. But we spent most of our time chipping away at layers of defense meant to stymie or deceive us, particularly about areas such as his life history, human rights abuse, and WMD, to name just a few” (p.71-72).

         Saddam saw himself as the “personification of Iraq’s greatness and a symbol of its evolution into a modern state,” with a “grand idea of how he fit into Iraq’s history” (p.86).  He was “always answering questions with questions of history, and he would frequently demand to know why we had asked about a certain topic before he would give his answer” (p.100). He often feigned ignorance to test his interrogators’ knowledge.  He frequently began his answers “by going back to the rule of Saladin.”  Nixon “often wondered afterward how many people told Saddam Hussein to keep it brief and lived to tell about it” (p.100).

       The meetings revealed to Nixon and his colleagues that the United States had seriously underestimated the degree to which Saddam saw himself as buffeted between his Shia opponents and their Iranian backers on one side, and Sunni extremists such as al-Qaeda on the other.  Saddam, himself a Sunni who became more religious in the latter stages of his life, could not hide his enmity for Shiite Iran.  He saw Iraq as the “first line of Arab defense against the Persians of Iran and as a Sunni bulwark against its overwhelmingly Shia population” (p.4).  But Saddam considered Sunni fundamentalism to be an even greater threat to his regime than Iraq’s majority Shiites or Iran.

       What made the Sunni fundamentalists, the Wahhabis, so threatening was that they “came from his own Sunni base of support. They would be difficult to root out without alienating the Iraqi tribes, and they could rely on a steady stream of financial support from Saudi Arabia. If the Wahhabists were free to spread their ideology, then his power base would rot from within” (p.124).  Saddam seemed genuinely mystified by the United States’ intervention in Iraq. He considered himself an implacable foe of Islamic extremism, and felt that the 9/11 attacks should have brought his country and the United States closer together.  Moreover, as he mentioned frequently, the United States had supported his country during the Iran-Iraq war.

          The meetings with Saddam further confirmed that in the years leading up to the United States intervention, he had begun to disengage from ruling the country.  At the time hostilities began, he had delegated much of the running of the government to subordinates and was mainly occupied with nongovernmental pursuits, including writing a novel.  Saddam in the winter of 2003 was “not a man bracing for a pulverizing military attack” (p.46), Nixon writes.  In all the sessions, Saddam “never accepted guilt for any of the crimes he was accused of committing, and he frequently responded to questions about human rights abuses by telling us to talk with the commander who had been on the scene” (p.129).

          On the eve of the 1991 Gulf War, President George H.W. Bush had likened Saddam to Hitler, and the idea took hold in the larger American public. But not once during the interviews did Saddam say he admired either Hitler or Stalin.  When Nixon asked which world leaders he most admired, Saddam said de Gaulle, Lenin, Mao and George Washington, because they were founders of political systems and thinkers.  Nixon quotes Saddam as saying, “Stalin does not interest me. He was not a thinker. For me, if a person is not a thinker, I lose interest” (p.165).

          When Nixon told Saddam that he was leaving Iraq to return to Washington, Saddam gave him a firm handshake and told Nixon to be just and fair to him back home.  Nearly three years later, in December 2006, Saddam was put to death by hanging in a “rushed execution in a dark basement” in an Iraqi Ministry (p.270), after the United States caved to Iraqi pressure and turned him over to what turned out to be little more than a Shiite lynch mob.  Nixon concludes that Saddam’s unseemly execution signaled the final collapse of the American mission in Iraq.  Saddam, Nixon writes, was:

not a likeable guy. The more you got to know him, the less you liked him. He had committed horrible crimes against humanity.  But we had come to Iraq saying that we would make things better.  We would bring democracy and the rule of law.  No longer would people be awakened by a threatening knock on the door.  And here we were, allowing Saddam to be hanged in the middle of the night (p.270).

* * *

            Nixon’s experiences with Saddam made him a familiar face at the White House and National Security Council when he returned to the United States in early 2004.  His meetings with President Bush convinced him that Bush never came close to getting a handle on the complexities of the Middle East.  After more than seven years in office, the president “still didn’t understand the region and the fallout from the invasion” (p.212). Nixon traces Bush’s decision to take the country into war largely to the purported attempt Saddam had made on his father’s life in the aftermath of the first Gulf War – a “misguided belief,” as Nixon puts it.  The younger Bush and his entourage ordered the invasion of a country “without the slightest clue about the people they would be attacking. Even after Saddam’s capture, the White House was only looking for information that supported its decision to go to war” (p.235).

          One of the ironies of the Iraq War, Nixon contends, was that Saddam Hussein and George W. Bush were alike in many ways:

Both had haughty, imperious demeanors.  Both were fairly ignorant of the outside world and had rarely traveled abroad.  Both tended to see things as black and white, good and bad, for and against, and became uncomfortable when presented with multiple alternatives. Both surrounded themselves with compliant advisers and had little tolerance for dissent. Both prized unanimity, at least when it coalesced behind their own views. Both distrusted expert opinion (p.240).

        Nixon is almost as tough on the rest of the team that surrounded Bush and contributed to the decision to go to war, although he found Vice President Dick Cheney to be a source of caution, providing a measure of good sense to discussions.  Cheney was “professional, dignified, and considerate . . . an attentive listener” (p.197-98).  But Nixon is sharply critical of the CIA Director at the time, George Tenet (even while refraining from mentioning the remark most frequently attributed to his former boss, that the answer to the question whether Saddam was stockpiling weapons of mass destruction was a “slam dunk”).

         In Nixon’s view, Tenet transformed the agency’s intelligence gathering function from one of neutral fact-finding, laying out the best factual assessment possible in a given situation, into an agency whose role was to serve up intelligence reports tailored to support the administration’s positions.  Tenet was “too eager to please the White House.  He encouraged analysts to punch up their reports even when the evidence was flimsy, and he surrounded himself with yes men” (p.225).  Nixon recounts how, prior to the 2003 invasion, the line level Iraq team at the CIA was given three hours to respond to a paper prepared by another agency purporting to show a connection between Saddam’s regime and the 9/11 attacks — a paper the team found “full of holes, inaccuracies, sloppy reporting and pie-in-the-sky analysis” (p.229).  Line level analysts drafted a dissenting note, but its objections were “gutted” by CIA leadership (p.230) and the faulty paper went on to serve as an important basis to justify the invasion of Iraq.

          Nixon left the agency in 2011. But in the latter portion of his book he delivers his fair share of parting shots at the post-Iraq CIA, which has become in his view a “sclerotic organization” (p.256) that “badly needs fixing” (p.261).  The agency’s leadership needs to “stop fostering the notion that the CIA is omniscient” and the broader foreign policy community needs to recognize that intelligence analysts can provide “only information and insights, and can’t serve as a crystal ball to predict the future” (p.261).  But as Nixon fires shots at his former agency, he lauds the line level CIA analysts with whom he worked. The analysts represent the “best and the brightest our country has to offer . . . The American people are well served, and their tax dollars well spent, by employing such exemplary public servants. I can actually say about these folks, ’Where do we get such people?’ and not mean it sarcastically” (p.273-74).

* * *

         Was Saddam worth removing from power? Nixon poses the question in his conclusion. “As of this writing, I see only negative consequences for the United States from Saddam’s overthrow” (p.257).  No serious Middle East analyst believes that Iraq was a threat to the United States, he argues.  The United States spent trillions of dollars and wasted the lives of thousands of its military men and women “only to end up with a country that is infinitely more chaotic than Saddam’s Ba’athist Iraq” (p.258).  The United States could have avoided this chaos, which has given rise to ISIS and other forms of Islamic extremism, “had it been willing to live with an aging and disengaged Saddam Hussein” (p.1-2).  Nixon’s conclusion, informed by his opportunity to probe the mindset of both Saddam Hussein and those who determined to remove him from power, rings true today and stings sharply.

Thomas H. Peebles

La Châtaigneraie, France

January 31, 2018

 

 

 

 


Filed under American Politics, Middle Eastern History, United States History

Pledging Allegiance to Stalin and the Soviet Union

Kati Marton, True Believer: Stalin’s Last American Spy 

 Andrew Lownie, Stalin’s Englishman: Guy Burgess, the Cold War, and The Cambridge Spy Ring 

          Spying has frequently been described as the world’s second oldest profession, and it may outrank the first as a subject matter that sells books. A substantial portion of the lucrative market for spy literature belongs to imaginative novelists churning out best-selling thrillers whose pages seem to turn themselves – think John Le Carré. Fortunately, there are also intrepid non-fiction writers who sift through evidence and dig deeply into the historical record to produce accounts of the realities of the second oldest profession and its practitioners, as two recently published biographies attest: Kati Marton’s True Believer: Stalin’s Last American Spy, and Andrew Lownie’s Stalin’s Englishman: Guy Burgess, the Cold War, and The Cambridge Spy Ring.

        Bearing similar titles, these works detail the lives of two men who in the tumultuous 1930s chose to spy for the Soviet Union of Joseph Stalin: American Noel Field (1904-1970) and Englishman Guy Burgess (1911-1963). Burgess, the better known of the two, was one of the infamous “Cambridge Five,” five upper class lads who, while studying at Cambridge in the 1930s, became Soviet spies. Field, less likely to be known to general readers, was a graduate of another elite institution, Harvard University. Seven years older than Burgess, he was recruited to spy for the Soviet Union at about the same time, in the mid-1930s.

           While the 1930s and the war that followed were formative periods for both young men, their stories became noteworthy in the Cold War era that followed World War II. Field spent five years in solitary confinement in post-war Budapest, from 1949 to 1954, imprisoned as a traitor to the communist cause after being used by Stalin and Hungarian authorities in a major show trial designed to root out unreliable elements among Hungary’s communist leadership and consolidate Stalin’s power over the country. His imprisonment led to the imprisonment of his wife, brother and informally adopted daughter. Burgess came to international attention in 1951 when he mysteriously fled Britain for Moscow with Donald Maclean, another of the Cambridge Five.  Burgess and Maclean’s whereabouts remained unknown and the source of much speculation until they resurfaced five years later, in 1956.

            Both men came from comfortable but not super-rich backgrounds.  Each lost his father early in life, a loss that unmoored both. After graduating from Harvard and Cambridge with elite diplomas in hand, they even followed similar career paths. Field served in the United States State Department and was recruited during World War II by future CIA Director Allen Dulles to work for the CIA’s predecessor agency, the Office of Strategic Services (OSS), all the while providing information to the Soviet Union. Burgess served during critical periods in the British equivalents, Britain’s Foreign Office and its premier intelligence agencies, MI5 and MI6, while he too reported to the Soviet Union.  Field worked with refugees during the Spanish Civil War and World War II. Burgess had a critical stint during the war at the BBC.  Both men ended their lives in exile, Field in Budapest, Burgess in Moscow.

          But the two men could not have been more different in personality.  Field was an earnest American with a Quaker background, outwardly projecting rectitude and seriousness, a “sensitive, self-absorbed idealist and dreamer” (M.3), as Marton puts it. Lownie describes Burgess as “outrageous, loud, talkative, indiscreet, irreverent, overtly rebellious” (L.30), a “magnificent manipulator of people and trader in gossip” (L.324).   Burgess was also openly gay and notoriously promiscuous at a time when homosexual conduct carried serious risks.  Field, Marton argues, was never one of Stalin’s master spies. “He lacked both the steel and the polished performance skills of Kim Philby or Alger Hiss” (M.3).  Lownie claims nearly the opposite for Burgess: that he was the “most important of the Cambridge Spies” (L.x).

          Marton’s biography of Field is likely to be the more appealing of the two for general readers. It is more focused, more selective in its use of evidence and substantively tells a more compelling story, raising questions still worth pondering today. Why did Field’s quest for a life of meaning and high-minded service to mankind lead him to become an apologist for one of the 20th century’s most murderous regimes? How could his faith in that regime remain unshaken even after it imprisoned him for five years, along with his wife, brother and informally adopted daughter? There are no easy answers to these questions, but Marton raises them in a way that leads her readers to consider their implications. “True Believer” seems the perfect title for her biography, a study of the psychology of pledging and maintaining allegiance to Stalin’s Soviet Union.

         “Stalin’s Englishman,” by contrast, struck me as an overstatement for Lownie’s work. Most of the book up to Burgess’ defection to Moscow in 1951 – which comes at about the book’s three-quarter mark – details his interactions in Britain with a vast array of individuals: Soviet handlers and contacts, British work colleagues, lovers, friends, and acquaintances.  Only in a final chapter does Lownie present his argument that Burgess had an enduring impact in the international espionage game and deserves to be considered the most important of the Cambridge Five.  Lownie’s biography suffers from what young people today term TMI – too much information.  He has uncovered a wealth of written documentation on Burgess and seems bent on using all of it, giving his work a gossipy flavor.  At its core, Lownie’s work is probably best understood as a study of how a flamboyant life style proved compatible with taking the pledge to Stalin and the Soviet Union.

* * *

          As a high school youth, Noel Field said he had two overriding goals in life: “to work for international peace, and to help improve the social conditions of my fellow human beings” (M.14). The introspective young Field initially saw communism and the Soviet Union as his means to implement these high-minded, humanitarian goals. But in a “quest for a life of meaning that went horribly wrong” (M.9), Field evolved into a hard-core Stalinist.  Marton frames her book’s central question this way: how does an apparently good man, “who started out with noble intentions,” end up sacrificing “his own and his family’s freedom, a promising career, and his country, all for a fatal myth”? “His is the story of the sometimes terrible consequences of blind faith” (M.1).

         Field was raised in Switzerland, where his father, a well-known, Harvard-educated biologist and outspoken New England pacifist, established a research institute. In secondary school in Zurich, Field was far more introspective and emotionally sensitive than his classmates. He had only one close friend, Herta Vieser, the “plump, blond daughter of a German civil servant” (M.12), whom he subsequently married in 1926.  Field’s father died suddenly of a heart attack at age 53, when Field was 17, shattering the peaceful, well-ordered family life the young man had known up to that time.

         Field failed to find any bearings a year later when he entered Harvard, his father’s alma mater. He knew nothing of America except what he had heard from his father, and at Harvard he was again an outsider among his privileged, callow classmates. But he graduated with full honors after only two years. In his mid-twenties, Marton writes, Field was still a “romantic, idealistic young man” who “put almost total faith in books. He had lived a sheltered, family-centered life” (M.30).

         From Harvard, Field entered the Foreign Service but worked in Washington, at the State Department’s West European Desk, where he performed brilliantly but again did not feel at home, “still in search of deeper fulfillment than any bureaucracy could offer” (M.26). In 1929, he attended an event in New York City sponsored by the Daily Worker, the newspaper of the American Communist Party.  It was a turning point for him.  The “warm, spontaneous fellowship” at the meeting made him think he had realized his childhood dream of “being part of the ‘brotherhood of man’” (M.41). Soviet agents formally recruited Field sometime in 1935, assisted by the persuasive efforts of State Department colleague and friend Alger Hiss.

          For Field, Marton writes, communism was a substitute for his Quaker faith. Like the Quakers, communists “encouraged self-sacrifice on behalf of others.” But the austere Quakers were “no match for the siren song of the Soviet myth: man and society leveled, the promise of a new day for humanity” (M.39-40).  Communism offered a tantalizing dream: “join us to build a new society, a pure, egalitarian utopia to replace the disintegrating capitalist system, a comradely embrace to replace cutthroat competition.”  In embracing communism, Field felt he could “deliver on his long-ago promise to his father to work for world peace” (M.39).

            In 1936, Field left the State Department to take a position in Geneva to work for the League of Nations’ Disarmament Section — and assist the Soviet Union. The following year, he reached another turning point when he participated in the assassination in Switzerland of a “traitor,” Ignaz Reiss, a battle-tested Eastern European Jewish Communist who envisioned exporting the revolution beyond Russia.  Reiss was appalled by the Soviet show trials and executions of 1936-38 and expressed his dismay far too openly for Stalin, making him a marked man. Others may have hatched the plot against Reiss, and still others pulled the trigger, Marton writes, “but Field was prepared to help” (M.246). He had “shown his willingness to do Moscow’s bidding – even as an accessory in a comrade’s murder. He had demonstrated his absolute loyalty to Stalin” (M.68).

            Deeply moved by the Spanish Civil War, Field became involved in efforts to assist victims and opponents of the Franco insurgency.  During the conflict, Field and his wife met a refined German doctor, Wilhelm Glaser, his wife, and their 17-year-old daughter Erica.  A precocious, feisty teenager, Erica was the only member of her high school class who had refused to join her school’s Hitler Youth Group.  She had contracted typhoid fever when her parents met the Fields. With her parents desperate for medical attention for their daughter, the Fields volunteered to take her with them to Switzerland. In what became an informal adoption, Erica lived with Noel and Herta for the next seven years, and the rest of her life remained intertwined with that of the Fields.  After Erica’s initial appearance in the book at about the one-third point, she becomes a central and inspiring character in Marton’s otherwise dispiriting narrative – someone who merits her own biography.

            When France fell to the Nazis in 1940, Field landed a job in Marseilles, France, with the Unitarian Service Committee (USC), a Boston-based humanitarian organization then charged with assisting the thousands of French Jews fleeing the Nazis, along with as many as 30,000 refugees from Spain, Germany, and Nazi-occupied territories of Eastern Europe.  Field’s practice was to prioritize communist refugees for assistance, including many hard-core Stalinists rejected by other relief organizations, hoping to repatriate as many as possible to their own countries “to seed the ground for an eventual postwar Communist takeover” (M.106).  It took a while for the USC to pick up on how Field had transformed it from a humanitarian relief organization into what Marton terms a “Red Aid organization” (M.131).

         After the Germans occupied the rest of France in November 1942, the Fields escaped from Marseilles to Geneva, where they continued to assist refugees and provide special attention to communists whom Noel considered potential leaders in Eastern Europe after the war.  While in Geneva, Field attracted the attention of Allen Dulles, an old family friend from Zurich in the World War I era who had also crossed paths with Field at the State Department in Washington.  Dulles, then head of OSS, wanted Field to use his extensive communist connections to infiltrate Nazi-occupied countries of Eastern Europe. With Field acting as a go-between, the OSS provided communists from Field’s network with financial and logistical support both during and after the war.

        But Field failed to understand that his network was composed largely of communists who had fallen into Stalin’s disfavor. Stalin considered them unreliable, with allegiances that might prioritize their home countries – Poland, East Germany, Hungary or Czechoslovakia – rather than the Soviet Union.  Although Stalin tightened the Soviet grip on these countries in the early Cold War years, he failed to bring Yugoslavia’s independent-minded leader, Marshal Josip Tito, into line.  To make sure that no other communist leaders entertained ideas of independence from the Soviet Union, Stalin targeted a host of Eastern European communists as “Titoists,” which became the highest crime in Stalin’s world — much like being a “Trotskyite” in the 1930s.   Stalin chose Budapest as the place for a new round of show trials, analogous to those of 1936-38.

            Back in the United States, in Congressional testimony in 1948, Whittaker Chambers named Field’s long-time friend Alger Hiss as a member of an underground communist cell based in Washington. Hiss categorically denied the allegation and mounted an aggressive counterattack, including a libel suit against Chambers. In the course of defending the suit, Chambers named Field as another communist who had worked at a high level in the State Department.  Field’s double life ended in the aftermath of Chambers’ revelations. He could no longer return to the United States.

            Field’s outing occurred when he was in Prague, seeking a university position after his relief work had ended. From Prague, he was kidnapped and taken to Budapest, where he was interrogated and tortured over his association with Allen Dulles and the CIA.  Like so many loyal communists in the 1930s show trials, Field “confessed” that his rescue of communists during the war was a cover for recruiting for Dulles and the arch-traitor, Tito.   He provided his interrogators with a list of 562 communists he had helped return to Poland, East Germany, Czechoslovakia, and Hungary.  All, Marton writes, “paid with their lives, their freedom, or – the lucky ones — merely their livelihood, for the crime of being ‘Fieldists’” (M.157).  At one point, authorities confronted Field with a man he had never met, a Hungarian national who had previously been a leader within Hungarian communist circles, and ordered Field to accuse the man of being his agent.  Field did so, and the man was later sentenced to death and hanged.

          Hungarian authorities used Field’s “confession” as the centerpiece in a massive 1949 show trial of seven Hungarian communists, including Laszlo Rajk, a lifelong communist and top party theoretician who had been Hungary’s Interior Minister and later its Foreign Minister.  All were accused of being “Fieldists,” who had attempted to overthrow the “peoples’ democracy” on behalf of Allen Dulles, the CIA, and Tito.  Field was not tried, nor did he appear as a witness in the trials.  All defendants admitted that Field had spurred them on; all were subsequently executed. By coincidence, Marton’s parents, themselves dissident Hungarian journalists, covered the trial.

           Field was kept in solitary confinement until released in 1954, the year after Stalin’s death. Marton excoriates Field for a public statement he made after his release. “We are not among those,” he declared, “who blame an entire people, a system or a government for the misdeeds of a handful of the overzealous and the misguided,” adding her own emphasis to Field’s statement. Field, she writes, thereby exonerated “one of history’s most cruel human experiments, blaming the jailing and slaughter of hundreds of thousands of innocents on a few excessively fervent bad apples” (M.194).

         Field’s wife Herta traveled to Czechoslovakia in the hope of getting information from Czech authorities on her missing husband’s whereabouts. Those authorities handed her over to their Hungarian counterparts, who placed her in solitary confinement in the same jail as her husband, although neither was aware of the other’s presence during her nearly five years of confinement.   When Field’s younger brother Hermann went looking for Field, he was arrested in Warsaw, where he had worked just prior to the outbreak of the war, assisting endangered refugees to immigrate to Great Britain. Herta and Hermann were also released in 1954. Hermann returned to the United States and published a short work about the experience, Trapped in the Cold War: The Ordeal of an American Family.

           Erica Glaser, Field’s unofficially adopted daughter, like Herta and Hermann, went searching for Noel and she too ended up in jail as a result.  Erica had moved to the American zone of occupied Germany after the war, working for the OSS. But she left that job to work for the Communist Party in the Hesse Regional Parliament. There, she met and fell in love with U.S. Army Captain Robert Wallach.  When her party superiors objected to the relationship, Erica broke her connections with the party and the couple moved to Paris. They married in 1948.

          In 1950, Erica decided to search for both Noel and Herta. Using her own Communist Party contacts, Erica was lured to East Berlin, where she was arrested. She was condemned to death by a Soviet military court in Berlin and sent to Moscow’s infamous Lubyanka prison for execution. After Stalin’s death, her death sentence was commuted, but she was shipped to Siberia, where she endured further imprisonment in a Soviet gulag (Marton’s description of Erica’s time in the Gulag reads like Caroline Moorehead’s account of several courageous French women who survived Nazi prison camps in World War II, A Train in Winter, one of the first books reviewed here in 2012).

       Erica was released in October 1955 under an amnesty declared by Nikita Khrushchev, but was unable to join her husband in the United States because of State Department concern over her previous Communist Party affiliations.  Allen Dulles intervened on her behalf to reunite her with her family in 1957.  She finally reached the United States, where she lived with her husband and their children in Virginia’s horse country, an ironic landing point for the fiery former communist.  Erica wrote a book based on her experiences in Europe, Light at Midnight, published in 1967, a clever inversion of Arthur Koestler’s Darkness at Noon.  She lived happily and comfortably in Virginia up to her death in 1993.

            Field spent his remaining years in Hungary after his release in 1954.  He fully supported the Soviet Union’s intervention in the 1956 Hungarian uprising. He stopped paying dues to the Hungarian Communist Party after the Soviets put an end to the “Prague Spring” in 1968, but Marton indicates that there is no evidence that the two events were related.  Field “never criticized the system he served, never showed regret for his role in abetting a murderous dictatorship,” Marton concludes. “At the end, Noel Field was still a willing prisoner of an ideology that had captured him when his youthful ardor ran highest” (M.249).  Field died in Budapest in 1970. His wife Herta died ten years later, in 1980.

* * *

            Much like Noel Field, Guy Burgess “never felt he belonged. He was an outsider” (L.332), Lownie writes.  But Burgess’ motivation for entry into the world’s second oldest profession was far removed from that of the high-minded Field: “Espionage was simply another instrument in his social revolt, another gesture of self-assertion . . . Guy Burgess sought power and realizing he was unable to achieve that overtly, he chose to do so covertly. He enjoyed intrigue and secrets for they were his currency in exerting power and controlling people” (L.332).

         Burgess’ father and grandfather were military men. His father, an officer in the Royal Navy, was frequently away during Burgess’s earliest years, and the boy depended upon his mother for emotional support and guidance. His father died suddenly of a heart attack when Guy was 13, bringing him still closer to his mother.  Burgess attended Eton College, Britain’s most prestigious “public school,” i.e., upper class boarding school, and from there earned a scholarship to study history at Trinity College, Cambridge. When Burgess arrived in 1930, left-wing radicalism dominated Cambridge.

         Burgess entered Cambridge considering himself a socialist and it was an easy step from there to communism, which appeared to many undergraduates as “attractive and simple, a combination of the best of Christianity and liberal politics” (L.41). Fellow undergraduates Kim Philby and Donald Maclean, whom Burgess met early in his tenure at Cambridge, helped move him toward communism.  Both were recruited to work as agents for the Soviet Union while at Cambridge, and Burgess followed suit in late 1934.  Burgess’ contacts within Britain’s homosexual circles made him an attractive recruit for Soviet intelligence services.

        Before defecting to Moscow, Burgess worked first as a producer and publicist at the BBC (for a while, alongside fellow Etonian George Orwell), followed by stints as an intelligence officer within both MI5 and MI6.  He joined the Foreign Office in 1944.  While with the Foreign Office, he was posted to the British Embassy in Washington, where he worked for about nine months.  Philby was his immediate boss in Washington and Burgess lived for a while with Philby’s family. In these positions, Burgess drew attention for his eccentric habits, e.g., constantly chewing garlic; for his slovenly appearance, especially dirty fingernails; and for incessant drinking and smoking — at one point, he was smoking a mind-boggling 60 cigarettes per day.  A Foreign Office colleague’s description was typical: Burgess was a “disagreeable character,” who “presented an unkempt, distinctly unclean appearance . . . his fingernails were always black with dirt. His conversation was no less grimy, laced with obscene jokes and profane language” (L.183). Burgess’ virtues were that he was witty and erudite, often a charming conversationalist, but with a tendency to name-drop and overstate his proximity to powerful government figures.

            Working at the highest levels within Britain’s media, intelligence and foreign policy communities, Burgess frequently seemed on the edge of being dismissed for unprofessional conduct, well before suspicions of his loyalty began to surface.  How Burgess could have remained in these high level positions despite his eccentricities remains something of a mystery.  One answer is that his untethered, indiscreet life-style served as a sort of cover: no one living like that could possibly be a double agent. As one colleague remarked, if he was really working for the Soviets, “surely he wouldn’t act the part of a parlor communist so obviously – with all that communist talk and those filthy clothes and filthy fingernails” (L.167).   Another answer is that he was a member of Britain’s old boy network, at the very top of the English class system, where there was an ingrained tendency not to be too probing or overly judgmental of one’s social peers.  Ben Macintyre emphasizes this point throughout his biography of Philby, reviewed here in June 2016, and Lownie alludes to it in explaining Burgess.

          The book’s real drama starts with Burgess’ sudden defection from Britain to the Soviet Union in 1951 with Donald Maclean, at a time when British authorities had finally caught onto Maclean — but before official doubts about Burgess had crystallized.  Burgess’s Soviet handler told Burgess, who had recently been sent home from the Embassy in Washington after he threatened a Virginia State Trooper who had stopped him for a speeding violation, that he needed to “exfiltrate” Maclean – get him out of Britain.  By leaving himself, Burgess surprised and angered his former boss Philby, who was charged with the British investigation into Maclean’s activities.  Burgess’ defection turned the focus on Philby, who defected himself a decade later.

          The route out of Britain that Maclean and Burgess took remains unclear, as do Burgess’s reasons for accompanying Maclean to the Soviet Union.   The official line was that the departure was nothing more than a “drunken spree by two low-level diplomats,” but the popular press saw the disappearance of the two as a “useful tool to beat the government” (L.264), while of course increasing circulation.  Sometime after his defection, British authorities awoke to the realization that the eccentric Burgess may have been more than just a smooth-talking, chain-smoking drunk.  But they were never able to assemble a solid case against him and did not believe that there would be sufficient evidence to prosecute him should he return to Britain.  In fact, he never returned, and the issue never had to be faced.

         The two men’s whereabouts remained an international mystery until 1956, when the Soviets staged an outing for a Sunday Times correspondent at a Moscow hotel.  Burgess and Maclean issued a written statement for the correspondent indicating that they had come to Moscow to work for better understanding between the Soviet Union and the West, convinced as they were that neither Britain nor the United States was seriously interested in better relations.   Burgess spent his remaining years in Moscow, where he was lonely and isolated.

        Burgess read voraciously, listened to music, and pursued his promiscuous lifestyle in Moscow, a place where homosexuality was a criminal offense less likely to be overlooked than in Britain.  Burgess clearly missed his former circle of friends in England.  During this period, he took to saying that although he remained a loyal communist, he would prefer to live among British communists. “I don’t like the Russian communists . . . I’m a foreigner here. They don’t understand me on so many matters” (L.315).  Stalin’s Englishman outlasted Stalin by a decade.  Burgess died in Moscow in 1963, at age 52, an adult lifetime of unhealthy living finally catching up with him. He was buried in a Moscow cemetery, the first of the Cambridge Five to go to the grave.

             Throughout the book’s main chapters, Burgess’ impact as a spy gets lost among the descriptions of his excessive smoking, drinking and homosexual trysts.  Burgess passed many documents to the Soviets, Lownie indicates.  Most revealed official British thinking at key points in British-Soviet relations, among them, documents involving the 1938 crisis with Hitler over Czechoslovakia; 1943-45 negotiations with the Soviets over the future of Poland; the Berlin blockade of 1948; and the outbreak of war on the Korean peninsula in 1950.  But there does not seem to be anything comparable to Philby’s cold-blooded revelations of anti-Soviet operations and operatives, leading directly to many deaths; or, for that matter, comparable to Field’s complicity in the Reiss assassination or his denunciation of Hungarian communists.

          In a final chapter, entitled “Summing Up” – which might have been better titled “Why Burgess Matters” – Lownie acknowledges that it is unclear how valuable the many documents Burgess passed to the Soviets actually were:

[E]ven when we know what documentation was taken, we don’t know who saw it, when, and what they did with the material. The irony is that the more explosive the material, the less likely it was to be trusted, as Stalin and his cohorts couldn’t believe that it wasn’t a plant. Also if it didn’t fit in with Soviet assumptions, then it was ignored (L. 323-24).

          One of Burgess’ most damaging legacies, Lownie argues, was the defection itself, which “undermined Anglo-American intelligence co-operation at least until 1955, and public respect for the institutions of government, including Parliament and the Foreign Office. It also bequeathed a culture of suspicion and mistrust within the Security Services that was still being played out half a century after the 1951 flight” (L.325-26).  Burgess may have been the “most important of the Cambridge spies,” as Lownie claims at the outset, but I was not convinced that the claim was proven in his book.

* * *

            Noel Field and Guy Burgess, highly intelligent and well educated men, were entirely different in character and motivation.  That both chose to live duplicitous lives as practitioners of the world’s second oldest profession is a telling indication of the mesmerizing power which Joseph Stalin and his murderous ideology exerted over the best and brightest of the generation which came of age in the 1930s.

Thomas H. Peebles

La Châtaigneraie, France

December 25, 2017


Filed under British History, Eastern Europe, European History, German History, History, Russian History, Soviet Union, United States History

Using Space to Achieve Control

Mitchell Duneier, Ghetto:

The Invention of a Place, the History of an Idea 

            In 1516, Venice’s ruling authorities, concerned about an influx into the city of Jews who had been expelled from Spain in 1492, created an official Jewish quarter. They termed the quarter “ghetto” because it was situated on a Venetian island that was known for its copper foundry, geto in Venetian dialect. In 1555, Pope Paul IV forced Rome’s Jews into a similarly enclosed section of the city also referred to as the “ghetto.” Gradually, the term began to be applied to distinctly Jewish residential areas across Europe. After World War II in the United States, the term took on a life of its own, applied to African-American communities in cities in the urban North. In Ghetto: The Invention of a Place, the History of an Idea, Mitchell Duneier, professor of sociology at Princeton University, examines the origins and usages of the word “ghetto.”

            The major portion of Duneier’s book explores how the word influenced selected post-World War II thinkers in their analyses of discrimination against African-Americans in urban America. While there were a few instances pre-dating World War II of the use of the term ghetto to describe African-American neighborhoods in the United States, it was Nazi treatment of Jews in Europe that gave impetus to this use of the term. By the 1960s, the use of the word ghetto to refer to African-American neighborhoods had become commonplace. Today, Duneier writes, the idea of the black ghetto in the United States is “synonymous in the social sciences and public policy discussions with such phrases as ‘segregated housing patterns’ and ‘residential racial segregation’” (p.220).

          Duneier wants us to understand the urban ghetto in the United States as a “space for the intrusive social control of poor blacks” (p.xii).  It is not the result of a natural sorting or Darwinian selection; it is not an illustration of the adage that “birds of a feather flock together.” He discourages any attempt to apply the term to, for example, poor whites, Hispanics or Chinese. The notion of a ghetto, he argues, becomes a “less meaningful concept if it is watered down to simply designate a black neighborhood that varies in degree (but not in kind) from white and ethnic neighborhoods of the city.  .  . Extending the definition to other minority groups . . . carries the cost of obscuring the specific mechanisms by which the white majority has historically used space to achieve power over blacks” (p.224). Duneier shows how, in the decades since World War II, theorists have emphasized different types of external controls over African-American communities: restrictive racial covenants in real estate contracts in the 1940s, precluding the sale of properties to African-Americans; the rise of educational and social welfare bureaucracies in the 1950s, 1960s and 1970s; and, more recently, police and prison control of African-American males resulting from the war on drugs.  But Duneier’s  story, tracing the idea of a ghetto, starts in Europe.

* * *

            In medieval times, Jews in French-, English- and German-speaking lands lived in “semi-voluntary Jewish quarters for reasons of safety as well as communal activity and self-help” (p.4). But Jewish quarters were “almost never obligatory or enclosed until the fifteenth century” (p.5). Jews were always free to come and go and were, in varying degrees, part of the community-at-large. This changed with the expulsion of Jews from Spain in 1492, with many migrating to Italy.  Following the designation of the ghetto in Venice in 1516, Pope Paul IV gave impetus to the rise of separate and unequal Jewish quarters when he issued an infamous Papal Bull in 1555, “Cum nimis absurdum.” In that instrument, the Pope mandated that all Rome’s Jews should live “solely in one and the same place, or if that is not possible, in two or three [places] or as many as are necessary, which are to be contiguous and separated completely from the dwellings of Christians” (p.8).  After centuries of identifying themselves as Romans and enjoying relative freedom of movement, Duneier writes, suddenly Rome’s Jews were forcibly relocated to a small strip of land near the Tiber, “packed into a few dark and narrow streets that were regularly inundated by the flooding river” (p.8).

          This pattern prevailed across Europe during the 17th and 18th centuries, with Jews living in predominantly Jewish quarters in most major cities, some semi-voluntary, others mandatory.  “Isolation in space” (p.7) became part of what it meant to be Jewish.  Napoleon’s war of conquest in Italy in 1797 led to the liberation of Jewish ghettos in Venice, Rome and across the Italian peninsula.  In the 19th century, ghettos began to disappear across Europe. Yet, Rome remained stubbornly resistant. When Napoleon retreated from Italy in 1814, Pope Pius VII almost immediately reinstated the Roman ghetto, sending the Jews back into the “same dank and overcrowded quarter that they had occupied for centuries” (p.12). A product of papal authority, the Roman ghetto was formally and officially abolished with Italian unification in 1870. Thus, Rome’s Jews, among the first in Europe to be confined to a ghetto, “became the last Jews in Western Europe to win the rights of citizenship in their own country” (p.12).

          Duneier perceives a benign aspect to confinement of Jews in traditional ghettos.  The ghetto was a “comfort zone” for often-thriving Jewish communities, a designated area where Jews were required to live but could exercise their faith freely, in a section of the city where they would not face opprobrium from fellow citizens. Jewish communities possessed “internal autonomy and maintained a wide range of religious, educational, and social institutions” (p.11). In Venice and throughout Europe, the ghetto represented a “compromise that legitimized but carefully controlled [Jewish] presence in the city” (p.7). The traditional European ghetto was thus “always a mixed bag. Separation, while creating disadvantages for the Jews, also created conditions in which their institutional life could continue and even blossom” (p.10).

      In the early 20th century, the word ghetto came to refer to high-density neighborhoods inhabited predominantly but voluntarily by Jews. In the United States, the word frequently denoted neighborhoods inhabited not by African-Americans but by Jewish immigrants from Eastern Europe. Then, when the Nazis came to power in Germany in 1933, they gave ominous new meanings to the word ghetto. Privately, Hitler used the word to compare areas of enforced Jewish habitation to zoos, enclosed areas where, as he put it, Jews could “behave as becomes their nature, while the German people look on as one looks at wild animals” (p.14). Publicly, and more politely, Hitler argued that confined Jewish quarters under the Nazis simply replicated the Catholic Church’s treatment of Jews in 19th century Rome.

          But ghettos controlled by the Nazis were more frequently like prisons, surrounded by barbed-wire walls. The Nazi ghetto was a place established with the “express purpose of destroying its inhabitants through violence and brutality” (p.22), a place where the state exercised the “firmest control over its subjects’ lives” (p.220). The Nazis’ virulent anti-Semitism, Duneier concludes, “transformed the ghetto into a means to accomplish economic enslavement, impoverishment, violence, fear, isolation, and overcrowding in the name of racial purity — all with no escape through conversion, and with unprecedented efficiency” (p.22).

* * *

       The fight in World War II against Hitler and Nazi tyranny, in which African-Americans participated in large numbers, understandably highlighted the pervasive discrimination that African-Americans faced in the United States. The modern Civil Rights movement came into being in the years following World War II, focused primarily on the Southern United States and its distinctive system of rigid racial separation known as “Jim Crow.” A less visible battle took place in Northern cities, where attention focused on discrimination in employment, education and housing. A small group of sociologists, centered at the University of Chicago, emphasized how African-Americans in nearly all cities in the urban North were confined to distinct neighborhoods characterized by sub-standard housing, neighborhoods that came to be referred to as ghettos.

       Framing the debate in post-war America was the work of Gunnar Myrdal, the Swedish economist who wrote what is now considered the classic analysis of discrimination in the United States, An American Dilemma.  Myrdal’s work, based on research conducted during World War II and published in 1944, assumed outsized importance in the post-war years.  Myrdal’s research focused principally on the Jim Crow South, where three-fourths of America’s black population then lived.  But Myrdal also advanced what may seem in retrospect like a naïve if idealistic view of Northern racial segregation: it was due primarily to the practice of inserting restrictive covenants into real estate sales contracts, forbidding the sale of property to minorities (Jews and Chinese, along with African-Americans, were among the groups frequently excluded by such covenants). Restrictive covenants directed against African-Americans had a racial-purity component uncomfortably similar to Nazi practices, most frequently excluding even persons with a single great-grandparent of black ancestry. Such clauses, Myrdal argued, were contrary to the basic American creed of equality.  Once white citizens were made aware of the contradiction, they would cease the practice of inserting such restrictions into real estate contracts, and housing patterns would desegregate.

        Myrdal himself rarely used the term ghetto and his treatment of the urban North was “perfunctory by any standard” (p.58). His main contribution was to view Northern segregation not as a natural occurrence, but as a “phenomenon of the majority’s power over a minority population” (p.63). Myrdal’s notion of majority white control over African-American communities influenced the views of two younger African-American sociologists from the University of Chicago, Horace Cayton and St. Clair Drake. In 1945, Cayton and Drake published Black Metropolis, a work that focused on discrimination in Chicago and the urban North but failed to gain the attention that Myrdal’s work had received. Duneier indicates that Myrdal’s analysis of the urban North suffered because he was unable to work out an arrangement with Cayton to use the younger scholar’s copious notes of interviews and firsthand observations of conditions in Chicago’s African-American communities.  

         Cayton and Drake sought to “systematically explain the situation of blacks who had recently moved from the rural South to the urban North” (p.233). They were among the first to use the word ghetto frequently as a description of African-American communities in the North.  The word was for them a “metaphor for both segregation and Caucasian purity in the Nazi era” (p.71-72): blacks who sought to leave, they wrote, encountered the “invisible barbed wire fence of restrictive covenants” (p.72; Duneier’s emphasis). Unlike residence in Hispanic or Chinese neighborhoods, Cayton and Drake argued, black confinement to black neighborhoods was permanent and officially sanctioned, and it was this confinement that gave African-American neighborhoods their ghetto-like quality.  For Cayton and Drake, therefore, ghetto was a term used to highlight the differences between African-American communities and other poor neighborhoods throughout the city.

         Echoing the interpretations of traditional European Jewish ghettos discussed above, Cayton and Drake emphasized the “more pleasant aspects of black life that were symbolic of an authentic black identity” (p.69). They argued that racial separation had created a refuge for blacks in a racist world and that blacks had no particular interest in mingling with white people, “having accommodated themselves over time to a dense and separate institutional life – ‘an intricate web of families, cliques, churches, and voluntary associations, ordered by a system of social classes’ – in their own black communities. This life so absorbed them as to make participation in interracial activities feel superfluous” (p.69). Today, Black Metropolis remains a “major inspiration for efforts to understand racial inequality, due to its focus on Northern racism, physical space, and the consequences of racial segregation” (p.79).

            Another protégé of Myrdal, the renowned psychologist Kenneth Clark, who earned his doctorate at Columbia University, emphasized in the 1950s and 1960s the extent to which external controls of black neighborhoods – absentee landlords and business owners, and school, welfare and public housing bureaucracies – produced a “powerless colony” (p.91). Clark’s 1965 work, Dark Ghetto, which Duneier considers the most important work on the African-American condition in the urban North since Cayton and Drake’s Black Metropolis two decades earlier, argued that the black ghetto was a product of the larger society’s successful “institutionalization of powerlessness” (p.114). Clark looked at segregated residential patterns as just one of several interlocking factors that together produced in ghetto residents a sense of helplessness and suspicion. Others included discrimination in the workplace and unequal educational opportunities. Clark thus saw urban ghettos as reinforcing “vicious cycles occurring within a powerless social, economic, political, and educational landscape” (p.137). Together, these cycles led to what Clark termed a “tangle of pathologies.”

       For Clark, the traditional Jewish European ghetto bore little resemblance to American realities. Rigid housing segregation was “more meaningfully a North American invention, a manner of existence that had little in common with anything that had come before in Europe or even in the U.S. South” (p.114).  More than any other thinker in Duneier’s study, Clark provided the term ghetto with a distinctly American meaning.  In the 1980s and 1990s, African-American sociologist William Julius Wilson rethought much of the received wisdom that had come from or through Myrdal and Clark.

            Wilson took into account the out-migration of African Americans from inner cities that had begun to gather momentum in the 1970s.  In explaining the plight of those left behind, Wilson argued that class had become a more significant factor than race. African-Americans were dividing into two major classes: a middle class, a “black bourgeoisie,” more and more often living outside the urban core – outside the ghetto – in outlying areas of the city or in the suburbs, not uncommonly in mixed black-white neighborhoods; and a “black underclass,” the least skilled, least educated and least fortunate African-Americans, who remained concentrated and isolated in the ghetto. In contrast to the African-American communities Cayton and Drake had described in the 1940s, those left behind in the 1970s and 1980s saw far fewer black role models they could emulate. A new form of American ghetto had emerged by 1980, Wilson argued, “characterized by geographic, social, and economic isolation. Unlike in previous eras, middle-class and lower-class blacks were having very different life experiences in the 1980s” (p.234).

        Wilson further posited that any neighborhood with a poverty rate of at least 40 percent should be termed a ghetto, thereby blurring the distinction between poor black and poor white or Hispanic neighborhoods.  Assistance programs that targeted poor communities generally, Wilson theorized, were more likely to be approved and implemented than programs targeting only African-American communities.  For the first time since the term ghetto had become part of the analysis of Northern housing patterns in the early post-World War II era, the term was now used without reference to either race or power. With Wilson’s analysis, Duneier contends, the history of the idea of a ghetto in Europe and America “no longer seemed relevant” (p.184).

         Duneier devotes a full chapter to Geoffrey Canada, a charismatic community activist rather than a theorist and scholar. In the 1990s and early 21st century, Canada came to see early education as the key to improving the quality of life in African American neighborhoods – in black ghettos – thereby increasing the range of work and living opportunities for African American youth.  Canada was one of the first to characterize the federal crackdown on drug crime as a tragic mistake, producing alarming rates of black incarceration.  As a result, the country was spending “far more money on prisons than on education” (p.198).

        Two white theorists, Daniel Patrick Moynihan and Oscar Lewis, also figure in Duneier’s analysis. Moynihan, an advisor to presidents Kennedy, Johnson and Nixon, and later a Senator from New York, described the black ghetto in terms of broken families. The relatively large number of illegitimate births and a matriarchal family structure in African-American communities, Moynihan argued, held back both black men and women.  Lewis, an anthropologist from the University of Illinois, advanced the notion of a “culture of poverty,” contending that poverty produces a distinct and debilitating mindset that is remarkably similar throughout the world, in advanced and developing countries, in urban and rural areas alike.

         In a final chapter, Duneier summarizes where his key thinkers have led us in our current conception of the term ghetto in the United States. “By the 1960s, an uplifting portrait of the black ghetto became harder to draw. Ever since, those left behind in the black ghetto have had a qualitatively different existence” (p.219). The word now signifies “restriction and impoverishment in a delimited residential space. This emphasis highlights the important point that today’s residential patterns did not come about ‘naturally’; they were promoted by both private and state actions that were often discriminatory and even coercive” (p.220).

* * *

         Duneier has synthesized some of the most important sociological thinking of the post-World War II era on discrimination against African Americans, producing a fascinating, useful and timely work.  But Duneier does not spoon-feed. The basis for his hypothesis that the links between the traditional Jewish European ghetto and the black American ghetto have gradually faded is not readily gleaned from the text. Similarly, how theorists used the term ghetto in their analyses of racial discrimination against African Americans seems at times a minor subtheme, overwhelmed by his treatment of the analyses themselves.  Duneier’s important work thus requires – and merits – a careful reading.

          Thomas H. Peebles

Washington, D. C.

July 27, 2017

Filed under American Society, European History, United States History

Trial By History

Lawrence Douglas, The Right Wrong Man:

John Demjanjuk and the Last Great Nazi War Crimes Trial 

          Among the cases seeking to bring to justice Nazi war criminals and those who abetted their criminality, that of Ivan Demjanjuk was far and away the most protracted, and perhaps the most confounding as well.  From 1976, up to his death in 2012, a few months short of his 92nd birthday, Demjanjuk was the subject of investigations and legal proceedings, including two lengthy trials, involving his wartime activities after becoming a Nazi prisoner of war. Born in the Ukraine in 1920, Demjanjuk was conscripted into the Red Army in 1941, injured in battle, and taken prisoner by the Nazis in 1942. After the war, he immigrated to the United States, where he settled in Cleveland and worked in a Ford automobile plant, changing his name to John and becoming a US citizen in 1958.

        Demjanjuk’s unexceptional and unobjectionable American immigrant life was disrupted in 1976 when several survivors of the infamous Treblinka death camp in Eastern Poland identified him as Ivan the Terrible, a notoriously brutal Ukrainian guard at Treblinka. In a trial in Jerusalem that began in 1987, an Israeli court found that Demjanjuk was in fact Treblinka’s Ivan and sentenced him to death. But the trial, which began as the most significant Israeli prosecution of a Nazi war criminal since that of Adolf Eichmann in 1961, finished as one of modern history’s most notorious cases of misidentification. In 1993, the Israeli Supreme Court found, based on newly discovered evidence, that Demjanjuk had not been at Treblinka. Rather, the new evidence established that Demjanjuk had served at four other Nazi camps, including 5½ months in 1943 as a guard at Sobibor, a camp in Poland, at a time when nearly 30,000 Jews were killed there.  In 2009, Demjanjuk went on trial in Munich for crimes committed at Sobibor. The Munich trial court found Demjanjuk guilty in 2011. With an appeal of the trial court’s verdict pending, Demjanjuk died ten months later, in 2012.

        The driving force behind both trials was the Office of Special Investigations (“OSI”), a unit within the Criminal Division of the United States Department of Justice. OSI initiated denaturalization and deportation proceedings (“D & D”) against naturalized Americans suspected of Nazi atrocities, usually on the basis of having provided misleading or incomplete information for entry into the United States (denaturalization and deportation are separate procedures in the United States, before different tribunals and with different legal standards; because no legislation criminalized Nazi atrocities committed during World War II, the ex post facto clause of the U.S. Constitution was considered a bar to post-war prosecutions of such acts in the United States). OSI had just come into existence when it initiated the D & D proceedings against Demjanjuk in 1981 that led to his trial in Israel, and its institutional inexperience contributed to the Israeli court’s misidentification of Demjanjuk as Ivan the Terrible. Twenty years later, in 2001, OSI initiated a second round of D & D proceedings against Demjanjuk for crimes committed at Sobibor.  By this time, OSI had added a handful of professional historians to its staff of lawyers (during my career at the US Department of Justice, I had the opportunity to work with several OSI lawyers and historians).

             In his thought-provoking work, The Right Wrong Man: John Demjanjuk and the Last Great Nazi War Crimes Trial, Lawrence Douglas, a professor of Law, Jurisprudence and Social Thought at Amherst College, aims to sort out and make sense of Demjanjuk’s 35-year legal odyssey, United States-Israel-United States-Germany.  Douglas argues that the expertise of OSI historians was the key to the successful 2011 verdict in Munich, and that the Munich proceedings marked a critical transformation within the German legal system. Although 21st century Germany was otherwise a model of responsible atonement for the still unfathomable crimes committed in the Nazi era, its hidebound legal system had up to that point amassed what Douglas terms a “pitifully thin record” (p.11) in bringing Nazi perpetrators to the bar of justice.  But through a “trial by history,” in which the evidence came from “dusty archives rather than the lived memory of survivors” (p.194), the Munich proceedings demonstrated that German courts could self-correct and learn from past missteps.

         The trial in Munich comprises roughly the second half of Douglas’ book. Douglas traveled to Munich to observe the proceedings, and he provides interesting and valuable sketches of the judges, prosecutors and defense attorneys, along with detail about how German criminal law and procedure adapted to meet the challenges in Demjanjuk’s case.  The man on trial in Munich was a minor cog in the wheel of the Nazi war machine, in many ways the polar opposite of Eichmann. No evidence presented in Munich tied Demjanjuk to specific killings during his service at Sobibor. No evidence demonstrated that Demjanjuk, unlike Ivan the Terrible at Treblinka, had engaged in cruel acts during his Sobibor service. There was not even any evidence that Demjanjuk was a Nazi sympathizer. Yet, based on historical evidence, the Munich court concluded that Demjanjuk had served as an accessory to murder at Sobibor.  The camp’s only purpose was extermination of its population, and its guards contributed to that purpose. As Douglas emphatically asserts, all Sobibor guards necessarily served as accessories to murder because “that was their job” (p.220).

* * *

            Created in 1979, OSI “represented a critical step toward mastering the legal problems posed by the Nazi next door” (p.10; a reference to Eric Lichtblau’s incisive The Nazi Next Door, reviewed here in October 2015).  But OSI commenced proceedings to denaturalize Demjanjuk before it was sufficiently equipped to handle the task.  In 1993, after Demjanjuk’s acquittal in Jerusalem as Ivan the Terrible, the United States Court of Appeals for the Sixth Circuit severely reproached OSI for its handling of the proceedings that led to Demjanjuk’s extradition to Israel.  The court found that OSI had withheld exculpatory identification evidence, with one judge suggesting that in seeking to extradite Demjanjuk OSI had succumbed to pressure from Jewish advocacy groups.

            The Sixth Circuit’s ruling was several years in the future when Demjanjuk’s trial began in Jerusalem in February 1987, more than a quarter of a century after completion of the Eichmann trial (the Jerusalem proceeding against Eichmann was the subject of Deborah Lipstadt’s engrossing analysis, The Eichmann Trial, reviewed here in October 2013). The Holocaust survivors who testified at the Eichmann trial had had little or no direct dealings with the defendant. Their purpose was didactic: providing a comprehensive narrative history of the Holocaust from the survivors’ perspective.  The Treblinka survivors who testified at Demjanjuk’s trial a quarter century later had a more conventional purpose: identification of a perpetrator of criminal acts.

            Five witnesses – four Treblinka survivors and a former guard at the camp – identified Demjanjuk as Ivan the Terrible.  Eliahu Rosenberg, who had previously testified at the Eichmann trial, provided a moment of high drama when he approached Demjanjuk, asked him to remove his glasses, looked him in the eyes and declared in Yiddish, the language of the lost communities in Poland, “This is Ivan. I say unhesitatingly and without the slightest doubt. This is Ivan from the [Treblinka] gas chambers. . . I saw his eyes. I saw those murderous eyes” (p.51). The Israeli court also allowed the Treblinka survivors to describe their encounters with Ivan the Terrible as part of a “larger narrative of surviving Treblinka and the Holocaust” (p.81). The court seemed influenced by the legacy of the Eichmann trial; it acted, Douglas emphasizes, “as if the freedom to tell their story was owed to the survivors” (p.81-82).

            The case against Demjanjuk also rested upon an identification card issued at Trawniki, an SS facility in Poland that trained specially recruited Soviet POWs to serve as auxiliaries who provided the SS with “crucial assistance in the extermination of Poland’s Jews, including serving as death camp guards” (p.52). The card contained a photo that unmistakably was of the youthful Demjanjuk (this photo adorns the book’s cover), and accurately reported his date of birth, birthplace, father’s name and identifying features. Trawniki ID 1393 listed Demjanjuk’s service at Sobibor, but not Treblinka. That, Israeli prosecutors explained, was because Sobibor had been his initial assignment at the time the card was issued.

          Demjanjuk’s defense was that he had not served at Treblinka, but his testimony was so riddled with holes and contradictions that the three experienced judges of the court – the fact finders in the proceeding; there was no jury – accepted in full the survivors’ testimony and sentenced Demjanjuk to death in 1988.  The death sentence triggered an automatic appeal to the five-judge Israeli Supreme Court (Eichmann was the only other defendant ever sentenced to death by an Israeli court). The appellate hearing did not take place until 1990, and benefitted from a trove of documents released by the Soviet Union during its period of glasnost (openness) prior to its collapse in 1991.

      The Soviet documents contained a “rather complete” (p.94) picture of Demjanjuk’s wartime service, confirming his work as a camp guard at Sobibor and showing that he had also served at three other camps, Okzawm, Majdanek and Flossenbürg, but with no mention of service at Treblinka.  Moreover, the Soviet documentation pointed inescapably to another man, Ivan Marchenko, as Treblinka’s Ivan the Terrible. In 1993, six years after the Jerusalem trial had begun, the Israeli Supreme Court issued a 400-page opinion in which it vacated the conviction. Although the court could have remanded the case for consideration of Demjanjuk’s service at other camps, it pointedly refused to do so. Restarting proceedings “does not seem to us reasonable” (p.110), the court concluded.  OSI, however, took a different view.

* * *

            Although Demjanjuk’s US citizenship was restored in 1998, OSI determined that neither his advancing age – he was then nearly 80 – nor his partial exoneration in Jerusalem after protracted proceedings was sufficient to allow him to escape being called to account for his service at Sobibor. Notwithstanding the rebuke from the federal court of appeals for its handling of the initial D & D proceedings, OSI in 2001 instituted another round of proceedings against Demjanjuk, 20 years after the first round. Everyone at OSI, Douglas writes, “recognized the hazards in seeking to denaturalize Demjanjuk a second time. The Demjanjuk disaster continued to cast a long shadow over the unit, marring its otherwise impressive record of success” (p.126). By this time, however, OSI had assembled a team of professional historians who had “redefined our historical understanding of the SS’s process of recruiting and training the auxiliaries who crucially assisted in genocide” (p.126). The work of the OSI historians proved pivotal in the second round of D & D proceedings, which terminated in 2008 with a ruling that Demjanjuk be removed from the United States; and pivotal in persuading a reluctant Germany to request that Demjanjuk be extradited to stand trial in Munich.

            The German criminal justice system at the time of Demjanjuk’s extradition was inherently cautious and rule-bound – perhaps the epitome of what a normal legal system should be in normal times and very close to what the victorious Allies would have hoped for in 1945 as they set out to gradually transfer criminal justice authority to the vanquished country. But, as Douglas shows, that system prior to the Demjanjuk trial was poorly equipped to deal with the enormity of the Nazi crimes committed in the name of the German state. Numerous German legal conceptions constituted obstacles to successful prosecutions of former Nazis and their accomplices.

          Once Germany regained its sovereignty after World War II and became responsible for its own criminal justice system, it “tenaciously insisted that Nazi atrocities be treated as ordinary crimes, requiring no special courts, procedures, or law to bring their perpetrators to justice” (p.20). Service in a Nazi camp, by itself, did not constitute a crime under German law.  A guard could be tried as an accessory to murder, but only if his acts could be linked to specific killings. There was also the issue of the voluntariness of one’s service in a Nazi camp. The German doctrine of “putative necessity” allowed a defendant to show that he entertained a reasonable belief that he had no choice but to engage in criminal acts.

            In the Munich trial, the prosecution’s case turned “less on specific evidence of what John Demjanjuk did than on historical evidence about what people in Demjanjuk’s position must have done” (p.218) at Sobibor, which, like Treblinka, had been a pure extermination facility whose only function was to kill its prison population.  With Demjanjuk’s service at Sobibor established beyond dispute, but without evidence that he had “killed with his own hand” (p.218), the prosecution in Munich presented a “full narrative history of how the camp and its guards functioned . . . [through a] comprehensive historical study of Sobibor and its Trawniki-trained guards” (p.219).

          Historical research developed by OSI historians and presented to the Munich court demonstrated that Trawniki guards “categorically ceased to be POWs once they entered Trawniki” (p.226). They were paid and received regular days off, paid home leave and medical care. They were issued firearms and were provided uniforms. The historical evidence thus demonstrated that the difference between the death-camp inmates and the Trawnikis who guarded them was “stark and unequivocal” (p.226).  Far from being “glorified prisoners,” Trawniki-trained guards were “vital and valued assistants in genocide” (p.228). The historical evidence further showed that all guards at Sobibor were “generalists.” They rotated among different functions, such as guarding the camp’s perimeter and managing a “well-rehearsed process of extermination.” All “facilitated the camp’s function: the mass killings of Jews” (p.220).

         Historical evidence further demolished the “putative necessity” defense, under which a defendant claimed a reasonable belief that he would face the direst consequences if he did not participate in the camp’s activities. An “extraordinary research effort was dedicated to exploring the question of duress, and the results were astonishing: historians failed to uncover so much as a single instance in which a German officer or NCO faced ‘dire punishment’ for opting out of genocide” (p.223).  The historical evidence thus provided the foundation for the Munich court to find Demjanjuk guilty as an accessory to murder. He was sentenced to five years’ imprisonment but released to a Bavarian nursing home pending appeal. Ten months later, on March 17, 2012, he died. Because his appeal was never heard, his lawyer was able to argue that his conviction had no legal effect and that Demjanjuk had died an innocent man.

           The Munich court’s holding that Demjanjuk had been an accessory to murder underscored the value of years of historical research. As Douglas writes:

Without the painstaking archival work and interpretative labors of the OSI’s historians, the court could never have confidently reached its two crucial findings: that in working as a Trawniki at Sobibor, Demjanjuk had necessarily served as an accessory to murder; and that in choosing to remain in service when others chose not to, he acted voluntarily. This “trial by history” enabled the court to master the prosecutorial problem posed by the auxiliary to genocide who operates invisibly in an exterminatory apparatus (p.255-56).

          In the aftermath of Demjanjuk’s conviction, German prosecutors considered charging as many as 30 still-living camp guards. One, Oskar Gröning, a former SS guard at Auschwitz, was convicted in 2015, in Lüneburg, near Hamburg.  Gröning admitted in open court that it was “beyond question that I am morally complicit. . . This moral guilt I acknowledge here, before the victims, with regret and humility” (p.258).  Gröning’s trial “would never have been possible without Demjanjuk’s conviction” (p.258), Douglas indicates. Camp guards such as Demjanjuk and Gröning were convicted “not because they committed wanton murders, but because they worked in factories of death” (p.260).

* * *

        Thirty years elapsed between Demjanjuk’s initial D & D proceedings in the United States in 1981 and the trial court’s verdict in Munich in 2011. Douglas acknowledges that the decision to seek to denaturalize Demjanjuk a second time and try him in Munich after the spectacularly botched trial in Jerusalem could be seen as prosecutorial overreach.  But despite these misgivings, Douglas strongly supports the Munich verdict: “not because I believe it was vital to punish Demjanjuk, but because the German court delivered a remarkable and just decision, one which few observers would have predicted from Germany’s long legal struggle with the legacy of Nazi genocide” (p.15).   Notwithstanding all the conceptual obstacles created by a legal system that treated the Holocaust as an “ordinary crime,” German courts in Demjanjuk’s case “managed to comprehend the Holocaust as a crime of atrocity” (p.260).  Demjanjuk’s conviction therefore serves as a reminder, Douglas concludes, that the Holocaust was “not accomplished through the acts of Nazi statesmen, SS henchmen, or vicious sociopaths alone. It was [also] made possible by the thousands of lowly foot soldiers of genocide. Through John Demjanjuk, they were at last brought to account” (p.257).

Thomas H. Peebles

Washington, D.C.

July 10, 2017

Filed under German History, History, Israeli History, Rule of Law, United States History