100% American?

 

Linda Gordon, The Second Coming of the KKK:

The Ku Klux Klan of the 1920s and the American Political Tradition (Liveright Publishing)

            The Ku Klux Klan, today a symbol of American bigotry, intolerance, and domestic terrorism at its most primitive, had three distinct iterations in United States history.  The original Klan arose in the American South in the late 1860s, in the aftermath of the American Civil War; it was a secret society that utilized intimidation, violence, assassination and other forms of terror to reestablish white supremacy and thwart efforts of recently freed African-American slaves to exercise basic rights.  This iteration of the Klan faded during the following decade, but not before helping to cement the regime of rigid racial segregation that prevailed in the American South for the remainder of the century and beyond.  Then, in the 1950s and 1960s, the Klan resurfaced in the South, again as an organization relying upon violence and intimidation to perpetuate white supremacy and rigid racial segregation, this time in the face of the burgeoning Civil Rights movement of the era. 

          In between was the Klan’s second iteration, emerging in the post-World War I 1920s and the subject of Linda Gordon’s The Second Coming of the KKK: The Ku Klux Klan of the 1920s and the American Political Tradition.  Gordon, a prominent American feminist and historian, portrays the 1920s Klan as significantly more complex than its first and third iterations.  Although bigotry and intolerance were still at the heart of the 1920s Klan, it directed its animosity not only at African-Americans but also at Catholics, Jews, and immigrants.  Gordon considers the second Klan to be a reaction to the supposed licentiousness of the “Roaring Twenties” and the rapidly changing social mores of the decade.   With a central mission of purging the country of elements deemed insufficiently “American,” the Klan in the 1920s sought to preserve or restore white Protestant control of American society, which it saw slipping away.

            As the reference to the “American Political Tradition” in the sub-title suggests, much of Gordon’s interpretation consists of an elaboration upon how six distinct American “traditions” came together to give rise to the Klan’s rebirth after World War I: racism, nativism, temperance, fraternalism, Christian evangelicalism, and populism.  She also includes a final section on how, despite ostensible similarities, the Klan differed from the European fascism that came to power in Italy and was bubbling in Germany in the same time frame.  Although it shared with fascist Italy and Nazi Germany a vision for the future based on “racialized nationalism,” the Klan’s nationalism melded racism and ethnic bigotry with evangelical Protestant morality.  The second Klan thus turned its enemies into sinners in a manner that set it apart not only from European fascism but also from the first and third Klan iterations.

            The 1920s Klan was anything but a secretive organization.  It elected hundreds of its members to public office, controlled newspapers and magazines, and boasted of six million members nationally.  It was a fraternal organization with innovative recruitment methods and a decentralized organizational structure, only marginally different from the Rotarians and the Masons.  Whereas the Klan in its first and third iterations was a distinctly southern organization, the 1920s Klan flourished in northern and western states as well as the American South; it was particularly strong in Indiana and Oregon. 

            In Gordon’s interpretation, the Klan in the 1920s further differentiated itself from its first and third iterations by engaging only rarely in what she terms “vigilantism” — overt intimidation and violence.  Readers expecting a gruesome recitation of middle-of-the-night lynchings, the Klan’s trademark form of domestic terrorism, are likely to be disappointed by this volume.  She rarely mentions the term “lynching.”  The primary incident of overt intimidation she highlights is one already familiar to many readers: the Klan’s nighttime assault in 1925 on the Omaha, Nebraska, house of the family of Malcolm Little, later known as Malcolm X.  Klansmen on horseback surrounded the Little house, shattered the windows and forced the family to flee Omaha.  The assault, Gordon indicates, was “typical of the northern Klan’s vigilantism – usually stopping short of murder or physical assault, but nevertheless communicating a credible threat of violence to Klan enemies.  The vast majority of Klanspeople never participated in this vigilantism” (p.94).  

            But what about vigilantism in the South?  Gordon hints at several points that murder and physical violence may have been more extensive in southern states than in the North and West (e.g., vigilantism was the Klan’s “core function” in the South, whereas Klan organizations in the North and West “rarely” engaged in violent attacks; p.206).  But she barely treats the American South, focusing almost exclusively on northern and western states, thereby leaving readers with the sense that they may not have received a full account of the vigilantism of the 1920s Ku Klux Klan, and that a book which delved more deeply into the 1920s Klan in southern states might have been altogether different from this account.

            At least in northern and western states, Gordon argues, the Klan’s views were not out of step with those of most white American Protestants, the majority group in the United States in the 1920s.  “Never an aberration” in its prejudices, the second iteration of the Klan was, “just as it claimed, ‘100% American’” (p.36).  But in enunciating values with which a majority of white American Protestants of the 1920s probably agreed, the Klan:

whirled these ideas into greater intensity.  The Klan argued that the nation itself was threatened.  Then it declared itself a band of warriors determined to thwart that threat.  In the military metaphors that filled Klan rhetoric, it had been directed by God – a Protestant God, of course – to lead an army of right-minded people to defeat the nation’s internal enemies (p.36). 

* * *

            Antagonism to diversity, a “form of pollution, uncleanliness,” is key to understanding the Klan in the 1920s.  “Fear of heterogeneity” underlay its “extreme nationalism and isolationism; Klanspeople saw little to admire in any foreign culture” (p.58).  The Klan viewed Catholics as threats because their religion was global, making Catholics subservient to Rome and disloyal to America  —  “underground warriors for their foreign masters” (p.45).  The Klan charged Catholics with what amounted to “unfair competition,” alleging that emissaries of the Pope in Rome had helped Catholics “take over police forces, newspapers, and big-city governments” (p.203). 

            Jews were guilty of a different kind of foreign allegiance, to a “secular international cabal of financiers who planned to take over the American economy through its financial institutions” and establish a “government within our government” (p.49).  In the Klan’s telling, Jews did not produce anything; they were mere financial middlemen who contributed no economic value to the United States.  The Klan blamed the Jews for the decline in morality, for women’s immodest dress, and for the debasement of the culture coming from Hollywood.   But, “in one remarkable silence about the Jews,”  Klan discourse “did not often employ the reverse side of classic anti-Semitism: that these dishonest merchant capitalists were also Communists” (p.49).  

            Among immigrants, the Klan targeted in particular Mexicans, Japanese, Chinese and other East Asians, along with Southern and Eastern Europeans (which of course included many Catholics and Jews).  Exempted were what it termed “Nordic” immigrants, generally Protestants from Germany, the Scandinavian countries and the British Isles.  The Klan argued “not only for an end to the immigration of non-‘Nordics’ but also for deporting those already here.  The date of their immigration, their longevity in the United States, mattered not” (p.27).  No matter how long such immigrants remained in the country, they could never become fully American.

            With rites based on Bible readings and prayer, the second Klan’s religiosity “might suggest that it functioned as a Protestant denomination.”  But the Klan was “not a denomination,” Gordon writes.  It sought to “incorporate existing Protestant churches, not replace them, and to put evangelism at their core.  It was in many ways a pan-Protestant evangelical movement, that is, an attempt to unite evangelical Protestants across their separate denominations” (p.88).  The Klan relied heavily upon evangelical ministers for recruitment, a mobilization that “foreshadowed – and probably helped generate – the entry of Christian Right preachers into conservative politics fifty years later” (p.90).  The 1920s “may have been the first time that bigotry became a major theme among [evangelical Protestant] preachers” (p.91).   

            The Klan joined enthusiastically with evangelical Protestants to support Prohibition, the anti-alcohol movement that succeeded in enshrining temperance in the American constitution as its 18th amendment.  For a full 14 years, from 1919 to 1933, the Klan theoretically had constitutional sanction for its vision of a world without alcoholic beverages.  Defense of Prohibition was universal among the Klan’s diverse chapters, and in Gordon’s view was “arguably responsible for the fact that many relatively tolerant citizens shrugged off its racist rhetoric” (p.95).  In supporting Prohibition, the Klan blamed its enemies for violations.  In the Klannish imagination, “Catholics did the drinking and Jewish bootleggers supplied them” (p.58).

            The Klan also joined with many women’s groups in supporting Prohibition.  Klanswomen formed a parallel organization, Women of the Ku Klux Klan (WKKK), which Gordon finds close in outlook and approach to the Women’s Christian Temperance Union, one of the major groups backing the 18th amendment.  The WKKK supported women’s suffrage – for white, Protestant women.  Klanswomen also supported women’s employment and even called for women’s economic independence.  Although outnumbered about 6 to 1 in the Klan, women contributed a new argument to the cause: that women’s emergence as active citizens would help purify the country, bringing “family values” back into the nation’s governance.  Women engaged in charitable work on behalf of the Klan, raised money for orphanages, schools and individual needy families, and placed Protestant bibles in the schools.  Women often led youth divisions of the Klan.  Without women’s long hours invested in Klan activities, Gordon argues, the second Klan “could not have become such a mass movement” (p.129). 

            Yet even in an organization based on male hierarchy, one that played specifically to white Protestant males’ anxiety over loss of privileged status in the new and unsettling post-World War I years, many women rose to national prominence as leaders of the Klan’s second coming.  Perhaps the most striking characteristic of such women was their “entrepreneurship,” which involved “both ambition and skill, both principle and profit . . . Experienced at organizing large events, state-of-the-art in managing money, unafraid to attract publicity, they were thoroughly modern women” (p.122-23).  Gordon seems unsure how to present these strong, assertive women who freely embraced the Klan creed of bigotry and intolerance.   The Klanswomen’s activism “requires a more capacious understanding of feminism,” she writes.  Their “combination of feminism and bigotry may be disturbing to today’s feminists, but it is important to feminism’s history.  There is nothing about a generic commitment to sex equality that inevitably includes a commitment to equalities across racial, ethnic, religious or class lines” (p.123).  At another point, she admonishes readers to “rid themselves of notions that women’s politics are always kinder, gentler, and less racist than men’s” (p.110).

            In its economic values, the Klan was wholly conservative.  It was devoted to the business ethic and revered men of great wealth, with its economic complaints invariably taking the form of “racial and religious prejudices”  (p.203).  The Klan sought to implement its vision of a white Protestant America “without fundamental changes to the political rules of American democracy.  The KKK was a political machine and a social movement, not an insurrectionary vanguard” (p.208).   What made the Ku Klux Klan so successful in the early 1920s was an aggressive, state-of-the-art approach to recruiting:

Far from rejecting commercialization and the technology it brought, such as radio, the Klan’s system was entirely up-to-date, even pioneering, in its methods of selling.  From its start, the second Klan used what might be called the social media of its time.  These methods – a professional PR firm, financial incentives to recruit, advertisements in the mass media, and high-tech spectacular pageants – produced phenomenal growth for several years (p.63).

            The Klan in its second iteration faded quickly, beginning around 1925.  By 1927 Klan membership had shrunk to about 350,000.  Several highly publicized scandals and cases of criminal embezzlement, exposing Klan leaders’ crimes, hypocrisy, and misbehavior, induced the Klan’s precipitous fall in the latter portion of the 1920s, along with the leaders’ “profiteering” — “gouging members through dues and the sale of Klan paraphernalia” (p.191).  Power struggles among leaders produced splits and even rival Klans under different names.  Rank-and-file resentment transformed the Klan’s already high turnover into “mass shrinkage as millions of members either failed to pay dues or formally withdrew” (p.191). 

            But the longest-term force behind the Klan’s decline may have been the increasing integration of Catholics and Jews into American society.  The “allegedly inassimilable Jews assimilated and influenced the culture, both high-brow and low-brow.  The alleged vassals of the pope began to behave like other immigrants, firm in their allegiance to America” (p.197).   By contrast, the Klan “never gave up its hatred for people of color.  As African-Americans moved northward and westward, as more Latin American and East Asian immigrants arrived, the latter-day Klan shifted toward a simpler, purer racial system, with two categories: white and not white” (p.197-98).

* * *

            Despite its precipitous decline, the Ku Klux Klan in its second iteration triumphed in many respects.  The biggest tangible Klan victory was in legislation restricting immigration.  Although the Klan was not solely responsible, its propaganda “surely strengthened racialized anti-immigrant sentiment both in Congress and among the voters” (p.195).  Less tangibly, the Klan “influenced the public conversation, the universe of tolerable discourse” (p.195).  The second Klan “spread, strengthened, and radicalized preexisting nativist and racist sentiments among the white population.  In reactivating these older animosities it also re-legitimated them.  However reprehensible hidden bigotry might be, making its open expression acceptable has significant additional impact” (p.195).   In this sense, Gordon’s compact and captivating interpretation serves as a reminder that the Klan remains a presence still to be reckoned with today, nearly a century after its second coming. 

Thomas H. Peebles

Bordeaux, France

January 28, 2019


Filed under United States History

A Tale of Three Cities’ Spaces and Places

Mike Rapport, The Unruly City:

Paris, London and New York in the Age of Revolution

In The Unruly City: Paris, London and New York in the Age of Revolution, Mike Rapport, professor of modern European history at Scotland’s University of Glasgow, provides a novel look at three urban centers in the last quarter of the 18th century: Paris, London and New York.  As the title indicates, the century’s last quarter was the age of revolution: in America at the beginning of this roughly 25-year period, as the 13 American colonies fought for their independence from Great Britain and became the United States of America; followed by the French Revolution in the next decade, which ended monarchial rule, abolished most privileges of the aristocracy and clergy, and uprooted deeply entrenched social and cultural norms.  Great Britain somehow avoided any such upheaval during this time, and that is one of the main points of the story. 

But radical democratic movements were afoot in all three countries, favoring greater equality and a drastically expanded franchise and opposing entrenched privilege  – objectives overlapping with but not identical to those of the revolutions in America and France.  How these democratic impulses played out in each city is the real core of Rapport’s story — or, more precisely, how these impulses played out in each city’s spaces and places.  In examining the contribution of each city’s topography – its spaces and places — to political outcomes, Rapport utilizes a “bottom up” approach which emphasizes the roles played by each city’s artisans, small shopkeepers, and everyday working people as they struggled against entrenched elites.  Rapport thus brings the perspective of an urban geographer and demographer to his story.  But there is also a geo-political angle that needs to be factored in. 

The French and Indian War, also known as the Seven Years War, in which France and its archenemy Britain vied between 1756 and 1763 for control of large swaths of the American continent, ended in ignominious defeat for France.  But both Britain and France emerged from the war with staggeringly high debts, triggering financial crises in both countries.  A decade and a half later, in 1777, monarchial France lent assistance to the American colonies as they broke away from Britain.  The newly formed United States of America in turn largely supported the French Revolution when it broke out in 1789, and sided with revolutionary France when it found itself again at war with Britain in 1792.  Rapport’s topographical approach, with its concentration on the cityscapes of Paris, London and New York, provides a fresh perspective to these familiar late 18th century events.

In the final quarter of the 18th century, Paris and London were sprawling nerve centers of venerable, centuries-old civilizations, while New York was far smaller, far younger, and not quite the nerve center of an emerging New World civilization.  In 1790, moreover, in the middle of Rapport’s story, New York lost its short-lived position as the political capital of the newly created United States of America.  But Paris was different from both New York and London in ways that are consequential for this multi-layered, complex and ambitious tale of three cities. 

Although France’s revolution was nation-wide, its course was dictated by events in Paris in a manner altogether different from the way the American Revolution unfolded in New York.  France in the last quarter of the 18th century lived under a monarchy described alternatively as “despotic” and “absolute.”  It benefitted from nothing quite comparable to America and Britain’s shared heritage from England’s 1688 “Glorious Revolution,” which established critical individual rights and checks upon monarchial power, all of which were “jealously defended by British subjects on both sides of the Atlantic and enviously coveted by educated, progressive Frenchmen and -women” (p.xv). Democratic radicalism in France thus had an altogether different starting point from that in America or Britain, one of the reasons radicalism fused with revolutionary fervor in France in a way it never did in either America or Britain.  These divergences between France on the one hand and America and Britain on the other help explain why Rapport’s emphasis on urban spaces and places serving political ends works best in Paris.   

Rapport resolutely links phases of the French Revolution to discrete Parisian spaces and places: giving impetus to the revolution’s early stages were the Palais-Royal, a formerly aristocratic enclave on the Right Bank, and the artisanal district of the Faubourg St. Antoine, located just east of the hulking Bastille fortress; Paris’ central market, Les Halles, and the Cordeliers district, centered around today’s Place de l’Odéon on the Left Bank, sustained the revolution’s more radical stages.  The distinct character of these sections of Paris, Rapport writes, goes “a long way to explain how the events unfolded and where much of the revolutionary impulse came from.”  Their geographical and social makeup made Paris a “truly revolutionary city, with a popular militancy that kept politics on the boil with each new crisis.  This combination of geography, social structure and political activism distinguished the Parisian experience from that of London and New York” (p.202). 

When Rapport moves from revolutionary Paris to New York and London, his urban topographical approach seems comparatively flat and somewhat forced.  He shows how New York’s Common, located near the city’s northern limits in today’s lower Manhattan, became the focal point for the city’s rising democratic fervor and its resistance to British rule.  In London, he focuses upon St. George’s Fields, functionally similar to New York’s Common as a location where large groups from all walks of life and all parts of the metropolis gathered freely.  St. George’s Fields, which today encompasses Waterloo Station, became the center of mass demonstrations in support of democratic radical John Wilkes, who was jailed for seditious libel in a prison overlooking this largely undeveloped, semi-rural expanse.   But the most compelling story for New York and London is how the democratic energy in the two cities stopped short of the thorough social and cultural uprooting of the French Revolution, much to the relief of elites in both cities.     

* * *

By the fateful year of 1789, Paris’ Palais-Royal, then an “elegant complex of colonnades, arcades, gardens, fountains, apartments, theatres, offices and boutiques” (p.127), had become a combative public gathering place where journalists and orators “intellectually pummeled, ideologically bludgeoned and rhetorically battered the old order” (p.125).  Questions involving royal despotism and the rights of citizens were debated and discussed across Paris and throughout France, but “nowhere did these great questions generate more white hot fervor than in the Palais-Royal” (p.127).  The Palais-Royal gave political voice to the insurrection against the monarchy and inherited privilege that broke out in Paris in the spring of 1789 and spread nation-wide.  Without the “contentious cauldron” of the Palais-Royal, Rapport concludes, it is “hard to imagine the insurrection unfolding as it did – and even having the revolutionary results that it did” (p.145).

The Faubourg St. Antoine contributed “special vigor” (p.126) to the 1789 uprising, which resulted in a transfer of power from the King to an elected chamber, the National Assembly, and the subsequent July 1789 assault on the Bastille. An artisanal district famous for its furniture and cabinet makers, the Faubourg St. Antoine was made “especially militant” (p.137), Rapport writes, by its topography and location, because the neighborhood was conscious of being outside the old limits of the city.  There was nothing in either New York or London to match the Faubourg’s “geographical cohesion, its homogeneity, its separateness and its defensiveness” (p.137). In Faubourg St. Antoine, a political uprising became a social and cultural upheaval as well.  As “bricks and mortar places,” Rapport writes, both the Palais-Royal and the Faubourg St. Antoine had a “material impact on the shape and outcome of events” and played outsized roles in marking the “final crisis of the old order” (p.126).

As the revolution became more radical, the central market of Les Halles, “the belly of Paris,” also played an outsized role.  Les Halles was the largest and most popular of several Parisian markets.  Its particular culture and geographic location gave Les Halles a “revolutionary dynamism” (p.177) that bound together those who lived and worked there, especially women.  A coordinated women’s march, fueled by food shortages throughout Paris, emanated from Faubourg St. Antoine and Les Halles in October 1789.  The march ended in Versailles, where the women invaded the National Assembly and gained an audience with King Louis XVI.  The King agreed to give his royal sanction to a series of revolutionary demands and, more to the point, promised that Paris would be supplied with bread.  Later the same day, the women forced the King and his family to return to Paris, where they lived as virtual hostages in a city whose women had “demonstrated their determination to keep the Revolution on track” (p.183).

In the aftermath of the march, the National Assembly, instilled with fear of the “unpredictable, uncontrollable force of popular insurrection” (p.185-86), restricted the vote to “active” citizens, adult males who paid a set level of taxes, only about one-half of France’s male population.  The subsequent move to expand the franchise in 1789-90 originated in the Cordeliers district, an “effervescent combination of an already articulate, politicized artisanal population, combined with the concentration of a sympathetic radical leadership” (p.188).  After Lucile and Camille Desmoulins, husband-and-wife journalists from the district, wrote an important article in which they attacked the restriction of the franchise – “What is this much repeated word active citizen supposed to mean?  The active citizens are the ones who took the Bastille” (p.190) – the Cordelier district assembly in June 1790 proposed that all males who paid “any tax whatsoever, including indirect taxes, which included just about everybody, should have ‘active’ citizenship” (p.188-89; notwithstanding the thorough uprooting of the French Revolution, there was no move to extend the franchise to women).   

The Cordeliers district narrowed the political divide between social classes in no small part because of the Society of the Friends of the Rights of Man and the Citizen, founded in the heart of the district.  Made up of merchants, artisans, tradesmen, retailers and radical lawyers, the Society also encouraged women to attend its sessions.  It saw its primary purpose as “rooting out the threats to the Revolution” and “challenging the limits placed on political rights by the emerging constitutional order” (p.191).  Its influence “rested in its distinctly metropolitan reach” and in having its roots in a neighborhood whose “social and political character made it a linchpin binding the axle of middle-class radicalism to the wheels of popular revolutionary activism” (p.195-96).  As the revolution entered its most radical phases, the Cordeliers district proved to be “one of the epicenters of the metropolitan outburst,” unlike any other district in Paris, bridging the “social gap between the radical middle-class leadership of the burgeoning democratic movement and the militants of the city’s working population” (p.195).    

            No specific Parisian neighborhoods are linked to the turn that the Revolution took in 1793-94 known as the Terror, “synonymous with the ghastly mechanics of the guillotine” (p.223).  This phase occurred at a time of multiple crises, when the newly declared French Republic grasped at repressive and draconian means to defend itself.  Driven by the  “blunt, direct and violent”  (p.226-27) radicals who called themselves sans-culottes (literally, those “without breeches”), the Terror was the period that saw King Louis XVI and Marie Antoinette executed, followed by a grim string of prominent figures deemed “enemies of the revolution” (among them prior revolutionary leaders Maximilien Robespierre and Georges-Jacques Danton, along with Cordelier journalist Camille Desmoulins).  Rapport’s chilling chapter on this phase serves as a reminder of the perils of excessive revolutionary zeal.

Throughout the Revolution, all sections of Paris felt its physical effects in the adaptation of buildings for the multitude of institutions of the new civic order. The process of taking over buildings in every quarter of Paris  — churches, offices, barracks and mansions — not only “made the Revolution more visible, indeed more intrusive, than ever before, but also represented the physical advance of the revolutionary organs deeper in the neighborhoods and communities of the capital” (p.226).  The “physical transformation of interiors, the adaptation of internal spaces and the embellishment of the buildings with revolutionary symbols, reflected the radicalism of the French Revolution in constructing an egalitarian order in an environment that had grown organically out of corporate society based on privilege and royal absolutism” (p.310).  In New York, the physical transformation of the city was not so thoroughgoing, “since the American Revolution did not constitute quite such a break with the past” (p.171).     

* * *

New York in the late 18th century was already an important business center, the major gateway into the New World for trade and commerce from abroad, with a handful of powerful, well-connected families dominating the city’s politics.  Although its population was a modest 30,000, diminutive in comparison to London and Paris, it was among the world’s most heterogeneous cities.  In its revolutionary years, New York witnessed what Rapport terms a “dual revolution,” both a “broad coalition of colonists against British rule” and a “revolt of the people against the elites,” which blended “imperial, local and popular politics in an explosive mix” (p.2). The contest between the “people of property” and the “mob” was about the “future forms of government, whether it should be founded upon aristocratic or democratic principles” (p.28-29), in the words of a future New York Senator.

The tumultuous period that ended with independence in 1783 began when Britain sought to raise money to pay for the Seven Years War through the Stamp Act of 1765, which imposed a duty on all legal documents (e.g., deeds, wills, licenses, contracts), the first direct tax Britain had imposed on its American colonies.  Triggered by resistance to the Stamp Act, the dual American revolution in the years leading up to war between the colonies and Britain moved in New York from sites controlled by the city’s elites, especially the debating chambers of City Hall, to sites more accessible to the public, in particular the open space known as the Common, along with the city’s taverns and the streets themselves. 

More than just a public space, the Common was “also a site where the power of the state, in all its ominous brutality, was on display” (p.18).  Barracks to house British troops had been erected on the Common during the Seven Years War, and it was the site of public executions.  It was on the Common that the Liberty Pole, the mast of a pine ship, was erected and became the city’s most conspicuous symbol of resistance, a “deliberate, defiant counterpoise” (p.18) to British state authority.  The first Liberty Pole was hacked down in August 1766, only to be replaced in the following days.  This pattern repeated itself several times, as the Common became the most politically charged place in New York, where a more militant, popular form of politics emerged to challenge the ruling elites.

  It was on the Common, at the foot of the Liberty Pole, that New Yorkers received the news in April 1775 that war with the British had broken out in New England.  In 1776, George Washington announced the promulgation of the Declaration of Independence on the same site. During the war for independence, the Liberty Pole became the symbolic site where people declared their support for independence – or, in many cases, were compelled to do so.     

In 1788, after the American colonies had won independence from Britain, the Common served as the start and end point of a massive parade through New York City in support of the proposed constitution to govern the country now known as the United States of America, at a time when the State of New York was still wrestling with the decision whether to ratify it.  The choice of the Common as the parade’s start and end point was, Rapport writes, highly symbolic, “connecting the struggle for the Constitution with the earlier battles around the Liberty Pole” (p.162).  Dominated by the city’s tradesmen and craft workers, the parade was a “tour of artisanal force” that “connected the Constitution with the commercial prosperity upon which the city and its working people depended,” serving as a reminder to the city’s elites that the revolution had “not just secured independence, but [had also] mobilized and empowered the people” (p.163).

The parade from the Common through New York’s streets also demonstrated the degree to which democratic radicalism in New York had been tempered.  The city’s radicals, aware that New York’s prosperity depended upon good commercial relations and a thriving mercantile community, “reached beyond mere vengeance and aimed at forging a more equal democracy, in which the overmighty power of the wealthy and the privileged would be cut down to size, allowing artisans and ‘mechanics’ to enjoy the democratic freedoms that they had done so much to secure” (p.156). 

With their vested interest in the financial and commercial prosperity of the city, New York’s radicals were not yet ready to call for “leveling,” or “social equality,” which ranked among the greatest fears of the city’s privileged classes.  In London, too, democratic radicalism stopped short of a full-scale challenge to the social order. 

* * *

While Britain was attempting to rein in America’s rebellious colonies, a movement for democratic reform emerged in London, centered on parliamentary reform and expansion of the suffrage.  The movement’s unlikely leader was journalist and parliamentarian John Wilkes, who symbolized “defiance towards the elites and the overbearing authority of the eighteenth-century British state” (p.35).  The liberties that Wilkes defended began with those specific to the City, a small and nearly autonomous enclave within metropolitan London.  Known today as London’s financial district, the City in the latter half of the 18th century was a “lively hub of activity of all kinds, not just finance but also highly skilled artisans, printers and merchants plying their trades” (p.37).  It had its own police constables and enjoyed privileges unavailable elsewhere in London, including direct access to King and parliament. 

Wilkes, writing “inflammatory satire,” excoriated the government and campaigned for an expansion of voting rights with a mixture of “irony, humor and vitriol” (p.42).  Wilkes tied his in-your-face radicalism to a defense of the traditional liberties and power of the City.  But his radicalism caused him to be expelled from the House of Commons, then tried and convicted of seditious libel.  For London’s working people, Wilkes became “another victim of a harsh, unforgiving system that seemed stacked in favor of the elites” (p.51).  Wilkes was jailed in a prison that overlooked St. George’s Fields, London’s undeveloped, semi-rural gathering point on the opposite side of the River Thames from the City.  St. George’s Fields came to represent symbolically a “departure from the narrow defense of the City’s privileges towards a broader demand for a national politics more responsive to the aspirations of the people at large” (p.44).   

When authorities failed to release Wilkes on an anticipated date in 1768, a major riot broke out in St. George’s Fields in which seven people were killed.  Mobilization on St. George’s Fields on behalf of Wilkes, Rapport writes,  “brought thousands of London’s working population into politics for the first time, people who had little or no stake in the traditional liberties of the City, let alone a vote in parliamentary elections, but who saw in Wilkes’s defiance of authority a mirror of their own daily struggle for self-respect and dignity in the face of the overbearing power of the state and the social dominance of the elites” (p.44).

Once freed, Wilkes went on to be elected Lord Mayor of the City in 1774 and chosen also to represent suburban Middlesex in Parliament.  Two years later he was pushing the altogether radical notion of universal male suffrage. But, rather than attacking the privileges of the City, the movement in support of Wilkes fused with a defense of the City.  This fusion, in Rapport’s view, “may be one reason the resistance to authority in London, though certainly riotous, did not become revolutionary . . . Londoners were able to make their protests without challenging the wider structure of politics” (p.52-53).   By coalescing around the figure of John Wilkes, the popular mobilization “reinforced rather than challenged the privileges that empowered the City to resist the king and Parliament” (p.56).

As revolution raged on the other side of the English Channel after 1789, many in London believed that Britain’s 1688 revolution “had already secured many basic rights and freedoms for British subjects; the French were starting from zero” (p.257).  Arguments about the French revolution and criticisms and defense of the British constitution were kept within legal boundaries in London.  It was the British habit of free discussion, Rapport concludes, “alongside, first, the commitment to legality among the reformers and, second, the relative caution with which . . . the government proceeded against them that ensured that London avoided a revolutionary upheaval in these years” (p.221).

* * *

Rapport sets a dauntingly intricate task for himself in seeking to demonstrate how the artisanal and working class populations of Paris, New York and London used each city’s spaces and places to abet radical democratic ideas.   His account of how those spaces and places helped shape revolutionary events in Paris from 1789 onward, and thereby transformed the city, makes up the best portions of his work, insightful and at times riveting.  His treatment of New York and London, where no such physical transformation occurred, has less zest.  But the tale of three cities comes together through Rapport’s detailing of moments in each place when “thousands of people, often for the first time, seized the initiative and tried to shape their own political futures” (p.317).

* * *

Thomas H. Peebles

Washington, D.C. USA

December 31, 2018


Filed under British History, French History, History, United States History

New Thinking in the Islamic Heartlands

 

 

Christopher de Bellaigue, The Islamic Enlightenment:

The Struggle Between Faith and Reason, 1798 to Modern Times 

          Christopher de Bellaigue is one of the leading English-language authorities on the volatile Middle East, an elegant stylist with an uncanny ability to explain that bewildering swath of the globe in incisive yet clear prose.  He is the author of a perceptive biography of Muhammad Mossadegh, the Iranian Prime Minister deposed in 1953 in a joint American-British covert operation, reviewed here in October 2014.  De Bellaigue’s most recent work, The Islamic Enlightenment: The Struggle Between Faith and Reason, 1798 to Modern Times, tackles head-on the widespread notion that Islam, the Middle East’s dominant religion, needs an intellectual, secular awakening similar to the 18th century Enlightenment which transformed Western society.  De Bellaigue delivers the message forthrightly that Islam has already undergone such a transformation.  Those who urge Enlightenment on Islam, non-Muslims and Muslims alike, are “opening the door on a horse that bolted long ago” (p.xvi; disclosure: I have argued in these pages that Islam needs a 21st century Enlightenment).  

          For the past two centuries, de Bellaigue writes, Islam has been undergoing a “pained yet exhilarating transformation – a Reformation, an Enlightenment and an Industrial Revolution all at once,” an experience of “relentless yet vitalizing alternation – of reforms, reactions, innovations, discoveries, and betrayals” (p.xvi).  The Islamic Enlightenment, like its Western counterpart, entailed the “defeat of dogma by proven knowledge, the demotion of the clergy from their position as arbiters of society and the relegation of religion to the private sphere,” along with the “ascendancy of democratic principles and the emergence of the individual to challenge the collective to which he or she belongs” (p.xxiv).  Although influenced and inspired by the West, the Islamic Enlightenment found its own forms.  It did not follow the same path as the European version.

          De Bellaigue concentrates almost exclusively on three distinct Islamic civilizations, Egypt, Iran (called “Persia” up to 1935, although de Bellaigue uses the word “Iran” throughout), and the Turkish Ottoman Empire.  These three civilizations constitute “Islam’s heartlands” (p.xxvi), the three most consequential intellectual, spiritual and political centers of the Middle East.  Although he barely mentions such major Islamic areas as North Africa or East Asia, there is logic and symmetry to de Bellaigue’s choices, starting with a different language in each: Arabic in Egypt; Persian (or Farsi) in Iran; and Turkish in Ottoman Turkey.  Egypt and Iran, moreover, represent full-strength versions of Sunni and Shiite Islam, respectively, whereas the Sunni Islam of the Ottoman Empire interacted with Christianity as the empire extended its suzerainty well into Europe.

          The Islamic Enlightenment had a clear starting point in de Bellaigue’s account: Napoleon Bonaparte’s invasion of Egypt in 1798, in which the Corsican general brought with him not only several thousand troops bent upon conquest but also the transforming ideas of the French Revolution and the French Enlightenment.   The French occupation was short-lived.  The British dislodged the over-extended French from Egypt in 1801 and retained a foothold there that became full colonial domination in the latter part of the 19th century.  But the transformative power of the new ways of thinking embodied in the French Enlightenment could not be so easily dislodged.  

          De Bellaigue begins with three chapters entitled “Cairo,” “Istanbul,” and “Tehran,” concentrating on Egypt, Turkey, and Iran in the first half of the 19th century.   Here he demonstrates how, in a recurrent pattern throughout the first half of the 19th century, the new ways of thinking arose in the three locations largely as unintended by-products of regimes whose relentless leaders pursued institutional modernization, particularly of the military, to defend against foreign incursions.  The succeeding chapters, entitled “Vortex” and “Nation,” treat the three civilizations collectively, and center on the increasing interaction and integration among the three in the second half of the century, up to World War I, along with their increasing servitude to the West at a time when European colonial acquisition began to run up against Muslim resistance.  De Bellaigue contends that World War I marked the beginning of the end for the Islamic Enlightenment, setting in motion the forces that undermined the liberalizing tendencies of the previous century.  His final chapter, termed “Counter Enlightenment,” takes us up to the dispiriting present.

          Unlike many works on the Western Enlightenment, de Bellaigue’s book goes beyond a history of ideas.  He is interested in how the new thinking of the Islamic Enlightenment was utilized in the three civilizations as an instrument of transformation — or “modernization,” his preferred term.  His work contains much insightful reflection on the nature of modernity and the process of modernization, as he addresses not only the intellectual changes that were afoot in the Islamic heartlands during the 19th and early 20th centuries, but also political and economic changes.  This broad focus renders his work something close to a comprehensive history of these lands over the past two centuries.  Along the way, de Bellaigue introduces an array of thinkers and political leaders, many also religious leaders, few of whom are likely to be familiar to Western readers.

* * *

           By way of background, de Bellaigue begins with a revealing picture of the three civilizations prior to 1798.   Many Western readers will be aware of the flowering of Islamic civilization from approximately the 9th century onward, a period of “glory, prosperity and achievement” (p.xxvi), in which the faith of the Prophet Muhammad created an “aesthetic culture of sophistication and beauty, excelling in architecture, textiles, ceramics and metallurgy” (p.xviii), along with mathematics — the study of algebra originated in the Arab world during this period, for example.  Dynamic centers of learning permitted the “unfettered exercise of the rational mind” (p.xviii) in a way that was unthinkable in Europe during what was considered Christendom’s “dark ages.” But sometime in the 15th century, Islam began to molder and decay, falling victim to the same wave of superstition and defensiveness that had beset Christian Europe after the fall of the Roman Empire.

          Egypt at the time of the Napoleonic conquest, nominally a province of Ottoman Turkey, “hadn’t produced an original idea in years.  Of the world outside Islam – the world of discovery and the Americas, science and the Industrial Revolution – there was a virtual boycott” (p.2).  Napoleon, inspired both by the prior intellectual vigor of Egypt and the transformative potential of the French Revolution, strove to restore the country to its earlier glory under France’s “benign tutelage” (p.4).  Napoleon brought with him a retinue of scholars who acted in the field of knowledge “as his army had acted on the field of battle, pointing to the future and shaming the past” (p.2).  The short-lived French occupation set in motion new ways of thinking that altered Egypt indelibly and, in de Bellaigue’s interpretation, jump-started the Islamic Enlightenment across the Islamic heartlands.

          The first of the Middle East’s “coercive modernizers” (p.18) was Muhammad Ali Pasha.  Although de Bellaigue resists the temptation to label Muhammad Ali the heavyweight champion of 19th century Middle Eastern modernization, that is a fair summation of the man who served as the Ottoman Sultan’s viceroy in Egypt from 1805 to 1849.  Ali packed more reforms into the first half of the 19th century than had been carried out in Egypt over the previous 300 years. He reined in Islamic clerics and reformed the state bureaucracy, agriculture, and education.  Above all, he modernized the military, with the Egyptian army becoming “both a symbol and a catalyst of the new Egypt”  (p.21). 

          Muhammad Ali showed little interest in fostering the Enlightenment spirit of irreverence, skepticism and individual empowerment.  But this spirit nonetheless arose as an irrepressible component of modernization.  The interaction with French scholars convinced Hassan al-Attar, arguably the first major thinker of the Islamic Enlightenment, that the progress which had surged through Europe was a universal impulse that could gain traction anywhere, and was in no way foreclosed to Muslim civilizations.  Spellbound by the Frenchmen he met in the aftermath of the Napoleonic conquest, al-Attar spent many formative years in Istanbul.  When he returned to Egypt, he took up the task of reconciling Islam with secular knowledge in fields as diverse as logic, history, science, medicine and geography.  One of al-Attar’s students, Rifaa al-Tahtawi, known as Rifaa, received from al-Attar “what may have been the most complete education available to any Egyptian at the time” (p.29), and went on to build upon his teacher’s efforts to show that the Muslim faith was compatible with progressive ideas. 

          Rifaa became the first 19th century Egyptian to study in France, spending five years there in the 1820s.  He wrote a seminal travelogue, the first comprehensive description in Arabic of post-revolutionary France.   Rifaa’s time in France “convinced him of the need for European sciences and technologies to be introduced into the Islamic world” (p.39).  Rifaa sought to close the distance between modern ideas and the capacity of Arabic to express them.  De Bellaigue characterizes Rifaa as a translator in the “broad sense of someone who fetches ideas from one home and makes them comfortable in another” (p.42).  His translated works had a “huge impact on the engineers, doctors, teachers and military officers who were beginning to form the elite of the country; they were the forerunners of the secular-minded middle classes that would dominate public life for much of the next two centuries” (p.43).

          In the sprawling, multi-faith Ottoman Empire, of which Egypt was but one province, Sultan Mahmud II was the approximate equivalent to Muhammad Ali, and Ibrahim Sinasi the complement to Rifaa.   As the 18th century came to a close, Ottoman Turkey, although not nearly as backward as Egypt, had suffered a handful of painful military losses to Russia that convinced Mahmud II that the empire sorely needed to upgrade its military, not least to quell separatist tendencies emanating from Muhammad Ali’s Egypt.  But the reforms instituted under Mahmud’s rule went well beyond the military, extending to education, statistics, modern sociology, agricultural innovation and political theory, with some of the most stunning innovations occurring in the education of doctors and the practice of medicine. 

          Like Rifaa, Sinasi spent time in Paris, where he saw the inadequacies of the Turkish language.  Sinasi gave birth to modern Turkish prose and drama.  Emulating Victor Hugo, Sinasi popularized concepts like freedom of expression and natural rights.  Cosmopolitan, outward looking, and drawn to questions of human development, Sinasi was one of the first in the Middle East to “define rights not as conferred from above, but as inseparable from the growth of a law-based society,” making him a “pioneer of a new mode of thinking” (p.80-81). 

          Iran was more isolated than Turkey or Egypt in the first half of the 19th century, and entered the modern era later and more sluggishly.  Yet, Iran had to contend throughout the century with the persistent meddling of Russia in its affairs, with Britain becoming equally meddlesome as the century progressed.  Iran in the first half of the 19th century had no forthright, determined and durable modernizer comparable to Muhammad Ali or Mahmud II.  It sent no fledgling intellectuals or future leaders to Europe for education.  Powerful Shiite clerics, proponents of “obscurantism, zealotry and fear” (p.129), served as a check on modernization.  

          But Nasser al-Din, who ruled as Iran’s Shah for 48 years, from 1848 to 1896, longer than either Ali or Mahmud II, found an engineer of reform in his tutor and then Chief Minister, Amir Kabir, 30 years his senior.   During a tenure that lasted only three years, Chief Minister Kabir pursued industrialization and manufacturing, introduced town planning, established a postal service, promoted reforms in medicine, education and agriculture, and reined in the Shiite clergy.  Nasser al-Din had Kabir removed from office, then executed, probably because Kabir was perceived to have been too close to British and Russian diplomats.

          Nasser al-Din’s increasingly tyrannical rule after Kabir’s demise saw the rise of Jamal al-Din Afghani, sometimes credited with being the Middle East’s first advocate of pan-Islamism, a complex set of ideas that revolved around the notion that Muslims needed to transcend state boundaries and stand up to Europeans.  Berating despotism and the European presence throughout the Muslim world, al-Din Afghani “embodied the use of Islam as a worldwide ideology of resistance against Western imperialism, knitting the Islamic heartlands together in a way that today seems impossible” (p.230).

            Backward and isolated Iran made the region’s most dramatic move toward modern nationhood when it underwent a constitutional revolution in 1905 that gave rise to a National Consultative Assembly, Iran’s first parliament.  The new powder of democracy was sprinkled over the land, with unprecedented levels of freedom of speech.  But Russia in 1907 signed an anti-German pact with Great Britain, a portion of which divided Iran in half, with Russia having a sphere of influence in the north, Britain in the south, all the while purporting to honor and respect Iran’s independence.  The two powers encouraged Iran to crack down on the constitutionalists, resulting in the installation of a military dictatorship in the name of the shah.  For the remainder of the century, democrats and constitutionalists in Iran were caught in the middle, with those who favored an unchecked monarchy competing with Shia clerics and their supporters for control over public policy.

          Turkey underwent a similar constitutional revolution following a military mutiny in Macedonia in June 1908.  The military officers formed a key part of a group of “young Turks” who came together to demand that the brutally repressive Sultan Abdulhamid revive and reform the Ottoman constitution of 1876.  With the surprising backing of the Sultan, a new legislative chamber met in December 1908, at a time when the Empire’s hold on its European provinces had begun to unravel.  The defeat by Bulgaria, Serbia, Montenegro and Greece in the First Balkan War in 1912 all but ended the Ottoman presence in Europe.   

          As the first decade of the 20th century closed, Egypt, by then formally a British colony which lacked Iran and Turkey’s experiences with electoral politics, was also developing institutions that might have underpinned a liberal political regime, “if permitted to mature” (p.293).  Across the region, a liberal, modernizing tradition had emerged strongly in the three intellectual and political centers of the Middle East.   In less than a century, de Bellaigue writes, the region had “leaped politically from the medieval to the modern” (p.291).  But World War I constituted an “unmitigated catastrophe” (p.295) for the region.

          The Ottoman Empire, which sided with Germany during the war, ended up as one of the war’s losers and was formally and finally dismantled in its aftermath.  Britain used the war to increase its hold on Egypt and suppress nationalist activity.  Iran, although officially neutral, was violated with impunity during the war, as Turkish, Russian and British armies “ran amok on Iranian soil” in an effort to exploit Iranian oil resources.  By the close of hostilities, Iran seemed “barely to have existed” (p.296). 

          The secret 1916 Sykes-Picot agreement between Britain and France, to which Tsarist Russia assented, divided most of the Middle East into British and French spheres of influence and has come to symbolize the “cupidity and arbitrariness” (p.299) of the Western powers in the Middle East.  But to de Bellaigue, Sykes-Picot was far from being the most consequential among the treaties, declarations, and gentlemen’s agreements that were imposed on the region.  This collection of instruments, “ill-considered, self-interested and indifferent to the desires of its inhabitants” (p.300), created a belt of instability across the region that endures to this day.  The post-war settlements also accelerated the importance of oil for world economies, skewing development and ensuring continued meddling of the West in the region.

          The Islamic counter-Enlightenment which de Bellaigue describes in his final chapter was a “response to the arbitrary settlements that had been imposed by the victors in the First World War” (p.315), expanding revulsion toward the West exponentially across the Middle East.  Fueled by the “paradoxical situation of imperialists advocating democracy” (p.315), the revulsion expressed itself in many forms, among them militant nationalism that left little room either for democratic norms or for Islam as a force that could provide internal coherence and strength to the region.

* * *

          Today, de Bellaigue concludes, it is “hard to discern any general movement in favor of liberal, humanist principles in the Middle East” (p.352).  Rather, the trend seems to be toward violence and sectarian hate, which makes it easy to discount the Islamic Enlightenment.  De Bellaigue’s erudite and – yes – enlightening work thus leaves us hoping wistfully that the sparks of new thinking which ignited Islamic civilization in the 19th century might somehow be rekindled in our time. 

Thomas H. Peebles

Washington, D.C., USA

December 15, 2018


Filed under History, Middle Eastern History, Religion

They Kept Us Out of War . . . Until They Didn’t

Michael Kazin, War Against War:

The American Fight for Peace, 1914-18 

            Earlier this month, Europe and much of the rest of the world paused briefly to observe the 100th anniversary of the day in 1918 when World War I, still sometimes called the Great War, officially ended. In the United States, where we observe Veterans’ Day without explicit reference to World War I, this past November 11th constituted one of the rare occasions when the American public focused on the four-year conflict that took somewhere between 9 and 15 million lives, including approximately 116,000 Americans, and shaped indelibly the course of 20th century history.  In War Against War: The American Fight for Peace, 1914-18, Michael Kazin offers a contrarian perspective on American participation in the conflict.  Kazin, professor of history at Georgetown University and editor of the avowedly leftist periodical Dissent, recounts the history of the diverse groups and individuals in the United States who sought to keep their country out of the conflict when it broke out in 1914; and how those groups changed, evolved and reacted once the United States, under President Woodrow Wilson, went to war in April 1917.

            The opposition to World War I was, Kazin writes, the “largest, most diverse, and most sophisticated peace coalition to that point in U.S. history” (p.xi). It included pacifists, socialists, trade unionists, urban progressives, rural populists, segregationists, and crusaders for African-American rights.  Women, battling at the same time for the right to vote, were among the movement’s strongest driving forces, and the movement enjoyed support from both Democrats and Republicans.  Although the anti-war opposition had a decidedly anti-capitalist strain – many in the opposition saw the war as little more than an opportunity for large corporations to enrich themselves — a handful of well-known captains of American industry and finance supported the opposition, among them Andrew Carnegie, Solomon Guggenheim and Henry Ford.  It was a diverse and colorful collection of individuals, acting upon what Kazin describes as a “profoundly conservative” (p.xviii) impulse to oppose the buildup of America’s military-industrial complex and the concomitant rise of the surveillance state.  Not until the Vietnam War did any war opposition movement approach the World War I peace coalition in size or influence.

            This eclectically diverse movement was in no sense isolationist, Kazin emphasizes. That pejorative term had not yet come into popular usage.  Convinced that the United States had an important role to play on the world stage beyond its own borders, the anti-war coalition sought to create a “new global order based on cooperative relationships between nation states and their gradual disarmament” (p.xiv).  Its members hoped the United States would exert moral authority over the belligerents by staying above the fray and negotiating a peaceful end to the conflict.

             Kazin tells his story in large measure through admiring portraits of four key members of the anti-war coalition, each representing one of its major components: Morris Hillquit, a New York labor lawyer and a Jewish immigrant from Latvia, standard-bearer for the Socialist Party of America and left-wing trade unions; Crystal Eastman, a charismatic and eloquent New York feminist and labor activist, on behalf of women; and two legislative representatives, Congressman Claude Kitchin, a populist Democrat from North Carolina and an ardent segregationist; and Wisconsin Republican Senator Robert (“Fighting Bob”) La Follette, Congress’ most visible progressive. The four disagreed on much, but they agreed that industrial corporations wielded too much power, and that the leaders of American industry and finance were “eager to use war and preparations for war to enhance their profits” (p.xiv).  Other well-known members of the coalition featured in Kazin’s story include Jane Addams, renowned social activist and feminist; William Jennings Bryan, Secretary of State under President Wilson, three-time presidential candidate, and Christian fundamentalist; and Eugene Debs and Norman Thomas, successively perennial presidential candidates of the Socialist Party of America.

            Kazin spends less time on the coalition’s opponents – those who had few qualms about entering the European conflict and, short of that, supported “preparedness” (always used with quotation marks): the notion that the United States needed to build up its land and naval capabilities and increase the size of its military personnel in the event that they might be needed for the conflict.  But those favoring intervention and “preparedness” found their voice in the outsized personality of former president Theodore Roosevelt, who mixed bellicose rhetoric with unadulterated animosity toward President Wilson, the man who had defeated him in a three-way race for the presidency in 1912.  After the United States declared war in April 1917, the former Rough Rider, then fifty-eight years old, sought to assemble his own volunteer unit and depart for the trenches of Europe as soon as the unit could be organized and trained.  To avoid this result, President Wilson was able to steer the Selective Service Act through Congress, establishing the national draft that Roosevelt had long favored – and Wilson had previously opposed.

             Kazin’s story necessarily turns around Wilson and his fraught relationship with the anti-war coalition. Stern, rigid, and frequently bewildering, Wilson was a firm opponent of United States involvement in the war when it broke out in 1914.  In the initial months of the conflict, Wilson gave the anti-war activists reason to think they had a sympathetic ear in the White House.  Wilson wanted the United States to stay neutral in the conflict so he could negotiate a lasting and just peace — an objective that the anti-war coalition fully endorsed.  He met frequently with peace groups and took care to praise their motives.  But throughout 1915, Wilson edged ever closer to the “preparedness” side. He left many on both sides confused about his intentions, probably deliberately so.  In Kazin’s interpretation, Wilson ultimately decided that he could be a more effective negotiator for a lasting and just peace if the United States entered the war rather than remained neutral. As the United States transitioned from neutral to belligerent, Wilson transformed from sympathizer with the anti-war coalition to its suppressor-in-chief. His transformation constitutes the most dramatic thread in Kazin’s story.

* * *

              The issue of shipping on the high seas precipitated the crisis with Germany that led Wilson to call for the United States’ entry into the war.  From the war’s outset, Britain had used its Royal Navy to prevent vessels from entering German ports, a clear violation of international law (prompting the quip that Britannia both “rules the waves and waives the rules” (p.25)).  Germany, with a far smaller naval force, retaliated by using its submarines to sink merchant ships headed for enemy ports.  The German sinking of the Cunard ocean liner RMS Lusitania off the coast of Ireland on May 7, 1915, killing nearly 1,200 passengers and crew, among them 128 Americans, constituted the beginning of the end for any real chance that the United States would remain neutral in the conflict.

            A discernible pro-intervention movement emerged in the aftermath of the sinking of the Lusitania, Kazin explains.  The move for “preparedness” was no longer just the cry of the furiously partisan or a small group of noisy hawks like Roosevelt.  A wide-ranging group suddenly supported intervention in Europe or, at a minimum, an army and navy equal to any of the belligerents.  Peace activists who had been urging their neutral government to mediate a settlement in the war “now faced a struggle to keep their nation from joining the fray” (p.62).

            After the sinking of the Lusitania, throughout 1916 and into the early months of 1917, “social workers and feminists, left-wing unionists and Socialists, pacifists and non-pacifists, and a vocal contingent of senators and congressmen from both major parties,” led by La Follette and Kitchin, “worked together to stall or reverse the drive for a larger and more aggressive military” (p.63), Kazin writes.  The coalition benefited from the “eloquent assistance” of William Jennings Bryan, who had recently resigned as Secretary of State over Wilson’s refusal to criticize Britain’s embargo as well as Germany’s attacks on neutral vessels.

            In the aftermath of the sinking of the Lusitania, Wilson grappled with the issue of “how to maintain neutrality while allowing U.S. citizens to sail across the perilous Atlantic on British ships” (p.103).  Unlike the peace activists, Wilson “tempered his internationalist convictions with a desire to advance the nation’s power and status . . . As the crisis with Germany intensified, the idealism of the head of state inevitably clashed with that of citizens whose desire that America be right always mattered far more than any wish that it be mighty” (p.149).

            As events seemed to propel the United States closer to war in late 1916 and early 1917, the anti-war activists found themselves increasingly on the defensive.  They began to concentrate most of their energies on a single tactic: the demand for a popular referendum on whether the United States should go to war.  Although the idea gathered genuine momentum, it found little support in Congress.  The activists never came up with a plausible argument why Congress should voluntarily give up or weaken its constitutional authority to declare war.

         In his campaign for re-election in 1916 against the Republican Party nominee, former Supreme Court Justice Charles Evans Hughes, Wilson ran as the “peace candidate,” dictated as much by necessity as desire.  “Few peace activists were ambivalent about the choice before them that fall,” Kazin writes.  “Whether as the lesser evil or a decent alternative, a second term seemed the only way to prevent Roosevelt . . . and [his] ilk from grabbing the reins of foreign policy” (p.124).  By September 1916, when Wilson left the White House for the campaign trail, he enjoyed the support of the “most left-wing, class-conscious coalition ever to unite behind a sitting president” (p.125).  Wilson eked out a narrow Electoral College victory in November over Hughes, with war opponents likely putting him over the top in three key states.

             Wilson’s re-election “liberated his mind and loosened his tongue” (p.141), as Kazin puts it.  In January 1917, he delivered to the United States Senate what came to be known as his “peace without victory” speech, in which he offered his vision for a “cooperative peace” that would “win the approval of mankind,” enforced by an international League of Peace. Borrowing from the anti-war coalition’s playbook, Wilson foreshadowed the famous 14 points that would become his basis for a peace settlement at the post-war 1919 Versailles Conference: no territorial gains, self-government and national self-determination for individual states, freedom of commerce on the seas, and a national military force for each state limited in size so as not to become an “instrument of aggression or of selfish violence” (p.141).  Wilson told the Senators that he was merely offering an extension of the United States’ own Monroe Doctrine.  But although he didn’t yet use the expression, Wilson was proposing nothing less than to make the world safe for democracy.  As such, Kazin notes, he was demanding “an end to the empires that, among them, ruled close to half the people of the world” (p.141).

           Wilson’s “stunning act of oratory” (p.142) earned the full support of the anti-war activists at home and many of their counterparts in Europe.  Most Republicans, by contrast, dismissed Wilson’s ideas as an “exercise in utopian thinking” (p.143). But, two months later, in March 1917, German U-boats sank three unarmed American vessels. This was the point of no return for Wilson, Kazin argues.  The president, who had “staked the nation’s honor and prosperity on protecting the ‘freedom of the seas,’ now believed he had no choice but to go to war” (p.172).  By this time, Wilson had concluded that a belligerent America could “end the conflict more quickly and, perhaps, spur ordinary Germans to topple their leaders, emulating their revolutionary counterparts in Russia.  Democratic nations, old and new, could then agree to the just and ‘cooperative’ peace Wilson had called for back in January.  By helping to win the war, the United States would succeed where neutrality had failed” (p.172).

* * *

           As the United States declared war on Germany in April 1917 (it never declared war on Germany’s allies Austria-Hungary and Turkey), it also seemed to have declared war on the anti-war coalition  and anyone else who questioned the United States’ role in the conflict.  The Wilson administration quickly turned much of the private sector into an appendage of the state, concentrating power to an unprecedented degree in the national government in Washington.  It persecuted and prosecuted opponents of the war effort with a ferocity few in the anti-war movement could have anticipated. “In no previous war had there been so much repression, legal and otherwise” (p.188), Kazin writes.  The Wilson administration, its allies in Congress and the judiciary all embraced the view that critics of the war had to “stay silent or suffer for their dissent” (p.189).  Wilson gave a speech in June 1917 in which he all but equated opposition with treason.

          The next day, Wilson signed into law the Espionage Act of 1917, designed to prohibit interference with military operations or recruitment as well as any support of the enemies of the United States during wartime.  The following year, Congress passed the even more draconian Sedition Act of 1918, which criminalized “disloyal, profane, scurrilous, or abusive language” about the government, the flag, or the “uniform of the armed forces” (p.246). The apparatus for repressing “disloyalty” had become “one tentacle of the newly potent Leviathan” (p.192).

            Kazin provides harrowing examples of the application of the Sedition Act.  A recent immigrant from Germany received a ten-year sentence for cursing Theodore Roosevelt and cheering a German victory on the battlefield.   Another served time for expressing his view that the conflict was a “rich man’s war and the United States is simply fighting for the money” (p.245); still another was prosecuted and jailed for charging that the United States Army was a “God damned legalized murder machine” (p.245).  Socialist Party and labor leader Eugene Debs received a ten-year sentence for telling party members – at a union picnic, no less – that their voices had not been heard in the decision to declare war.  The administration was unable to explain how repression of these relatively mild anti-war sentiments was helping to make the world safe for democracy.

            Many in the anti-war coalition, understandably, fell into line or fell silent, fearing that they would be punished for “refusing to change their minds” (p.xi). Most activists understood that, as long as the conflict continued, “resisting it would probably yield them more hardships than victories” (p.193).  Those continuing in the shrunken anti-war movement felt compelled to “defend themselves constantly against charges of disloyalty or outright treason” (p.243).  They fought to “reconcile their fear and disgust at the government’s repression with a hope that Wilson might still embrace a ‘peace without victory,’ even as masses of American troops made their way to France and into battle” (p.243).

           Representative Kitchin and Senator La Follette, the two men who had spearheaded opposition to the war in Congress, refrained from expressing doubts publicly about the war effort.  Kitchin, chairman at the time of the House of Representatives’ powerful Ways and Means Committee, nonetheless structured a revenue bill to finance the war by placing the primary burden on corporations that had made “excess profits” (p.244) from military contracts.  La Follette was forced to leave the Senate in early 1918 to care for his ill son, removing him from the storm that would have ensued had he continued to espouse his unwavering anti-war views.  The feminist activist Crystal Eastman helped create the National Civil Liberties Bureau, a predecessor to the American Civil Liberties Union, and started a new radical journal, the Liberator, after the government prohibited a previous publication from using the mails.  Socialist Morris Hillquit, like La Follette, was able to stay out of the line of fire in 1918 when he contracted tuberculosis and was forced out of New York City and into convalescence in the Adirondack Mountains, 300 miles to the north.

           Although the United States was formally at war with Germany for the last 19 months of a war that lasted over four years, given the time needed to raise and train battle-ready troops, it was a presence on the battlefield for only six months.  The tardy arrival of Americans on the killing fields of Europe was, Kazin argues, “in part, an ironic tribute to the success of the peace coalition in the United States during the neutral years” (p.260-61).  Hundreds of thousands of Americans would likely have been fighting in France by the summer of 1917 if Theodore Roosevelt and his colleagues and allies had won the fight over “preparedness” in 1915 and 1916.  “But the working alliance between radical pacifists like Crystal Eastman and progressive foes of the military like La Follette severely limited what the advocates of a European-style force could achieve – before Woodrow Wilson shed his own ambivalence and resolved that Americans had to sacrifice to advance self-government abroad and preserve the nation’s honor” (p.260-61).

          * * *

          Kazin’s energetic yet judicious work sheds valuable light on the diverse groups that steadfastly followed an alternate route for advancing self-government abroad – making the world safe for democracy — and preserving their nation’s honor.  As American attention to the Great War recedes in the aftermath of this month’s November 11th remembrances, Kazin’s work remains a timely reminder of the divisiveness of the conflict.

Thomas H. Peebles

La Châtaigneraie, France

November 16, 2018

 


Filed under American Politics, European History, History, United States History

Just How Machiavellian Was He?

 

Erica Benner, Be Like the Fox:

Machiavelli’s Lifelong Quest for Freedom 

            Niccolò Machiavelli (1469-1527), the Florentine writer, civil servant, diplomat and political philosopher, continues to confound historians, philosophers and those interested in the genealogy of political thinking.  His name has become a well-known adjective, “Machiavellian,” referring to principles and methods of expediency, craftiness, and duplicity in politics.  Common synonyms for “Machiavellian” include “scheming,” “cynical,” “shrewd” and “cunning.”  For some, Machiavellian politics constitute nothing less than a prescription for maintaining power at any cost, in which dishonesty is exalted and the killing of innocents authorized if necessary.  Machiavelli earned this dubious reputation primarily through his best known work, The Prince, published in 1532, five years after his death, in which he purported to advise political leaders in Florence and elsewhere – “princes” – on how to maintain power, particularly in a republic, where political leadership is not based on monarchy or titles of nobility and citizens are supposed to be on equal footing.

            But to this day there is no consensus as to whether the adjective “Machiavellian” fairly captures the Florentine’s objectives and outlook.  Many see in Machiavelli an early proponent of republican government and consider his thinking a precursor to modern democratic ideas.  Erica Benner, author of two other books on Machiavelli, falls squarely into this camp.  In Be Like the Fox: Machiavelli’s Lifelong Quest for Freedom, Benner portrays Machiavelli as a “thorough-going republican,” and a “eulogist of democracy” who “sought to uphold high moral standards” and “defend the rule of law against corrupt popes and tyrants” (p.xvi).   Benner discounts the shocking advice of The Prince as bait for tyrants.

            Machiavelli wore the mask of helpful advisor, Benner writes, “all the while knowing the folly of his advice, hoping to ensnare rulers and drag them to their ruin” (p.xv).  As a “master ironist” and a “dissimulator who offers advice that he knows to be imprudent” (p.xvi), Machiavelli’s hidden intent was to “show how far princes will go to hold on to power” and to “warn people who live in free republics about the risks they face if they entrust their welfare to one man” (p. xvi-xvii).   A deeper look at Machiavelli’s major writings, particularly The Prince and his Discourses on Livy, nominally a discussion of politics in ancient Rome, reveals Machiavelli’s insights on several key questions about republican governance, among them: how can leaders in a republic sustain power over the long term; how can a republic best protect itself from threats to its existence, internal and external; and how can a republic avoid lapsing into tyranny.

            Benner advances her view of Machiavelli as a forerunner of modern liberal democracy by placing the Florentine “squarely in his world, among his family, friends, colleagues and compatriots” (p.xix).  Her work has some of the indicia of biography, yet is unusual in that it is written almost entirely in the present tense.  Rather than setting out Machiavelli’s ideas on governance as abstractions, she has taken his writings and integrated them into dialogues, using italics to indicate verbatim quotations – a method which, she admits, “transgresses the usual biographical conventions” but nonetheless constitutes a “natural way to show [her] protagonist in his element” (p.xx).  Benner’s title alludes to Machiavelli’s observation that a fox has a particular kind of cunning that can recognize traps and avoid snares.  Humans need to emulate a fox by being “armed with mental agility rather than physical weapons” and developing a kind of cunning that “sees through ruses, decent words or sacred oaths” (p.151).

            Machiavelli’s world in this “real time” account is almost Shakespearean, turning on intrigue and foible in the pursuit and exercise of power, and on the shortsightedness not only of princes and those who worked for them and curried their favor, but also of those who worked against them and plotted their overthrow.  But Benner’s story is not always an easy story to follow.  Readers unfamiliar with late 15th and early 16th century Florentine politics may experience difficulty in constructing the big picture amidst the continual conspiring, scheming and back-stabbing.  At the outset, in a section termed “Dramatis Personae,” she lists the story’s numerous major characters by category (e.g., family, friends, popes), and readers will want to consult this helpful list liberally as they work their way through her rendering of Machiavelli. The book would have also benefitted from a chronology setting out in bullet form the major events in Machiavelli’s lifetime.

* * *

               Florence in Machiavelli’s time was already at its height as the center of the artistic and cultural flourishing known as the Renaissance.  But Benner’s story lies elsewhere, focused on the city’s cutthroat political life, dominated as it was by the Medici family.  Bankers to the popes, patrons of Renaissance art, and masters of political cronyism, the Medici exercised close to outright control of Florence from the early 15th century until thrown out of power in 1494, with the assistance of French king Charles VIII, at the outset of Machiavelli’s career. They recaptured control in 1512, but were expelled again in 1527, months before Machiavelli’s death, this time with the assistance of Hapsburg Emperor Charles V.  Lurking behind the Medici family were the popes in Rome, linked to the family through intertwining and sometimes familial relationships.   In a time of rapidly shifting alliances, the popes competed with rulers from France, Spain and the mostly German-speaking Holy Roman Empire for worldly control over Florence and Italy’s other city-states, duchies and mini-kingdoms, all at a time when ominous challenges to papal authority had begun to gather momentum in other parts of Europe.

           The 1494 plot that threw Piero de’ Medici out of power was an exhilarating moment for the young Machiavelli.  Although Florence under the Medici had nominally been a republic — Medici leaders insisted they were simply “First Citizens” — Machiavelli and other Florentines of his generation welcomed the new regime as an opportunity to “build a republic in deed, not just in name, stronger and freer than all previous Florence governments” (p.63).  With the Medici outside the portals of power, worthy men of all stripes, and not just Medici cronies, would be “free to hold office, speak their minds, and play their part in the great, messy, shared business of civil self-government” (p.63).

              Machiavelli entered onto the Florentine political stage at this optimistic time.  He went on to serve as a diplomat for the city of Florence and held several high-level civil service positions, including secretary – administrator – for Florence’s war committee.   In this position, Machiavelli promoted the idea that Florence should abandon its reliance upon mercenaries with no fixed loyalties to fight its wars and cultivate its own homegrown fighting force, a “citizens’ militia.”

          Machiavelli’s civil service career came to an abrupt halt in 1513, shortly after Giuliano de’ Medici, with the assistance of Pope Julius II and Spanish troops, wrested back control over Florence’s government. The new regime accused Machiavelli of participating in an anti-Medici coup.  He was imprisoned, tortured, and banished from government, spending most of the ensuing seven years on the family farm outside Florence. Ironically, he had reconciled with the Medici and re-established a role for himself in Florence’s government by the time of the successful 1527 anti-Medici coup, two months prior to his death.   Machiavelli thus spent his final weeks as an outcast in a new government that he in all likelihood supported.

         The Prince and the Discourses on Livy took shape between 1513 and 1520, Machiavelli’s period of forced exile from political and public life, during which he drew upon his long experience in government to formulate his guidance to princes on how to secure and maintain political power. Although both works were published after his death in 1527, Benner uses passages from them — always in italics — to illuminate particular events of Machiavelli’s life.  Extracting from these passages and Benner’s exegesis upon them, we can parse out a framework for Machiavelli’s ideal republic.  That framework begins with Machiavelli’s consistent excoriation of the shortsightedness of the ruling princes and political leaders of his day, in terms that seem equally apt to ours.

                To maintain power over the long term, leaders need to eschew short-term gains and benefits and demonstrate, as Benner puts it, a “willingness to play the long game, to pit patience against self-centered impetuosity” (p.8). As Machiavelli wrote in the Discourses, “for a prince it is necessary to have the people friendly; otherwise he has no remedy in adversity” (p.167).  A prince who thinks he can rule without taking popular interests seriously “will soon lose his state . . . [E]ven the greatest princes need to deal transparently with their allies and share power with their people if they want to maintain their state” (p.250).  Governments that seek to satisfy the popular desire are “firmer and last longer than those that let a few command the rest” (p.260).   Machiavelli’s long game thus hints at the modern notion that the most effective government is one that has the consent of the governed.

           Machiavelli’s ideal republic was grounded not in direct rule by the people but in what we today would term the “rule of law.”  In his Discourses, Machiavelli argued that long-lasting republics “have had need of being regulated by the laws” (p.261).  It is the “rule of laws that stand above the entire demos and regulate the relations between ‘its parts,’ as he calls them,” Benner explains, “so that no class or part can dominate the others” (p.275).  Upright leaders should put public laws above their own or other people’s private feelings.  They should resist emotional appeals to ties of family or friendship, and punish severely when the laws and the republic’s survival so demand.  Arms and justice together are the foundation of Machiavelli’s ideal republic.

            Several high-profile executions of accused traitors and subversives convinced Machiavelli to reject the idea that when a republic is faced with internal threats, “one cannot worry too much about ordinary legal procedures or the rights of defendants” (p.121).  No matter how serious the offense, exceptional punishments outside the confines of the law “set a corrupting precedent” (p.121).  Machiavelli’s lifelong dream that Florence should cultivate its own fighting force rather than rely upon mercenaries to fight its wars with external enemies arose out of similar convictions.

             In The Prince and the Discourses, Machiavelli admonished princes that the only sure way to maintain power over time is to “arm your own people and keep them satisfied” (p.49).  Cities whose people are “free, secure in their livelihood, respected and self-respecting, are harder to attack than those that lack such robust arms” (p.186). Florence hired mercenaries because its leaders didn’t believe their own people could be trusted with arms. But mercenaries, whose only motivation for fighting is a salary, can  just as easily turn upon their employers’ state, hardly a propitious outcome for long-term sustainability.

               During Machiavelli’s time in exile, the disputatious monk Martin Luther posted his Ninety-Five Theses onto a church door in German-speaking Wittenberg, challenging a wide range of papal practices.  Luther’s provocation set in motion the Protestant Reformation and, with it, more than a century of bloody conflict in Europe between Protestants and Catholics.  The Prince became an instrument in the propaganda wars stirred up by the Reformation, Benner contends, with Machiavelli demonized “mostly by men of religion, both Catholic and Protestant” (p.xv), who saw in the Florentine’s thinking a challenge to traditional relations between church and state.

              These men of religion rightly perceived that the church would have little role to play in Machiavelli’s ideal republic.  In the Discourses, Benner explains, Machiavelli argued that the Christian “sect,” as he called it, had “always declared war on ideas and writings that it could not control – and especially on those that presented ordinary human reasoning, not priestly authority, as the best source of guidance in private and political life” (p.317).  Men flirt with disaster when they purport to know the unknowable under the guise of religious “knowledge.”  For Machiavelli, unchanging, universal moral truths can be worked out only through a close study of human interactions and reflections on human nature.  Instead of praying for some new holy man to save you, Machiavelli advised, “learn the way to Hell in order to steer clear of it yourself” (p.282).   These views earned all of Machiavelli’s works a place on the Catholic Church’s 1557 Index of Prohibited Books, one of the Church’s solutions to the heresies encouraged by the Reformation, where they remained until 1890.

* * *

              The ruthlessly duplicitous Machiavelli – his “evil double” (p.xiv), as Benner puts it – is barely present in Benner’s account.  Her Machiavelli, an “altogether human, and humane” (p.xvi) commentator and operative on the political stage of his time, exudes few of the qualities associated with the adjective that bears his name.

Thomas H. Peebles

La Châtaigneraie, France

October 25, 2018

Filed under Biography, European History, History, Italian History, Political Theory, Rule of Law

Solitary Confrontations

 

Glenn Frankel, High Noon:

The Hollywood Blacklist and the Making of An American Classic 

            High Noon remains one of Hollywood’s most enduringly popular movies. The term “High Noon” is now part of our everyday language, meaning a “time of a decisive confrontation or contest,” usually between good and evil, in which good is often embodied in a solitary person.  High Noon is a fairly simple story, yet filled with tension.  The film takes place in the small western town of Hadleyville.  Former marshal Will Kane, played by Gary Cooper, is preparing to leave town with his new bride, Amy Fowler, played by Grace Kelly, when he learns that notorious criminal Frank Miller, whom Kane had helped send to jail, has been set free and is arriving with his cronies on the noon train to take revenge on the marshal.  Amy, a devout Quaker and a pacifist, urges her husband to leave town before Miller arrives, but Kane’s sense of duty and honor compels him to stay. As he seeks deputies and assistance among the townspeople, Kane is rebuffed at each turn, leaving him alone to face Miller and his gang in a fatal gunfight at the film’s end.

          High Noon came to the screen in 1952 at the height of Hollywood’s anti-communist campaign, best known for its practice of blacklisting, by which actors, writers, directors, producers, and others in the film industry could be denied employment based upon past or present membership in or sympathy for the American Communist Party.  Developed and administered by film industry organizations and luminaries, among them Cecil B. DeMille, John Wayne and future American president Ronald Reagan, blacklisting arose during the early Cold War years as Hollywood’s response to the work of the United States House of Representatives’ Committee on Un-American Activities, better known as HUAC.

            Until surpassed by Senator Joseph McCarthy, HUAC was the driving force in post World War II America’s campaign to uproot communists and communist sympathizers from all aspects of public life.  The Committee exerted pressure on Hollywood personnel with suspected communist ties or sympathies to avoid the blacklist by “cooperating” with the Committee, which entailed in particular “naming names” – identifying other party members or sympathizers.  Hollywood blacklisting had all the indicia of what we might today call a “witch hunt.” Blacklisting also came close to curtailing High Noon altogether.

         Glenn Frankel’s engrossing, thoroughly-researched High Noon: The Hollywood Blacklist and the Making of An American Classic captures the link between the film classic and Hollywood’s efforts to purge its ranks of present and former communists and sympathizers. Frankel places the anti-communist HUAC investigations and the Hollywood blacklisting campaign within the larger context of a resurgence of American political conservatism after World War II – a “right wing backlash” (p.45) — with the political right struggling to regain the upper hand after twelve years of New Deal politics at home and an alliance with the Soviet Union to defeat Nazi Germany during World War II.  There was a feeling then, as today, Frankel explains, that usurpers had stolen the country: “outsiders had taken control of the nation’s civil institutions and culture and were plotting to subvert its security and values” (p.x).   The usurpers of the post-World War II era were liberals, Jews and communists, and “self-appointed guardians of American values were determined to claw it back” (p.x).

           Hollywood, with its “extraordinary high profile” and “abiding role in our national culture and fantasies” (p.xi), was uniquely placed to shape American values and, to many, communists and Jews seemed to be doing an inordinate amount of the shaping.  In an industry that employed nearly 30,000 persons, genuine communists in Hollywood probably never exceeded 350, with screenwriters roughly half of the 350.  But 175 screenwriters, unless thwarted, could freely produce what right-wing politicians termed “propaganda pictures” designed to undermine American values.  Communists constituted a particularly insidious threat because they looked and sounded indistinguishable from others in the industry, yet were “agents of a ruthless foreign power whose declared goal was to destroy the American way of life” (p.x).  That a high proportion of Hollywood’s communists were Jewish heightened suspicion of the Jews who, from Hollywood’s earliest days as the center of the film industry, had played an outsized role as studio heads, screenwriters, and agents.  Jews in Hollywood were at once “uniquely powerful” and “uniquely vulnerable” to the attacks of anti-Semites who accused them of “using the movies to undermine traditional American values” (p.13).

             Frankel’s account of this struggle over security and values involves a multitude of individuals, primarily in Hollywood and secondarily in Washington, but centers upon the interaction among three: Gary Cooper, Stanley Kramer, and Carl Foreman.  Cooper was the star of High Noon and Kramer its producer.   Foreman wrote the script and was the film’s associate director until his refusal in September 1951 to name names before HUAC forced him to leave High Noon before its completion.  Foreman and Kramer, leftist-leaning politically, were “fast-talking urban intellectuals from the Jewish ghettos of Chicago [Foreman] and New York [Kramer]” (p.xvi).  Foreman had been a member of the American Communist Party as a young adult in the 1930s until sometime in the immediate post-war years; Kramer’s relationship to the party is unclear in Frankel’s account.  Cooper was a distinct contrast to Foreman and Kramer in just about every respect, a “tall, elegant, and reticent” (p.xvi) Anglo-Saxon Protestant from rural Montana, the product of conservative Republican stock who liked to keep a low profile when it came to politics.

             Although Cooper was the star of High Noon, Foreman emerges as the star in Frankel’s examination of HUAC investigations and blacklisting. Foreman saw his encounter with HUAC in terms similar to those Cooper, as Will Kane, confronted in Hadleyville: he was the marshal, HUAC seemed like the gunmen coming to kill the marshal, and the “hypocritical and cowardly citizens of Hadleyville” found their counterparts in the “denizens of Hollywood who stood by passively or betrayed him as the forces of repression bore down” (p.xiii).  The filming of High Noon had begun a few days prior to Foreman’s testimony before HUAC and was completed in just 32 days, on what amounted to a shoestring budget of $790,000.  How the 84-minute black-and-white film survived Foreman’s departure constitutes a mini-drama within Frankel’s often gripping narrative.

* * *

         In most accounts, Hollywood’s blacklisting practices began in 1947, when ten writers and directors — the “Hollywood Ten” — appeared before HUAC and refused to answer the committee’s questions about their membership in the Communist Party.  They were cited for contempt of Congress and served time in prison.  After their testimony, a group of studio executives, acting under the aegis of the Association of Motion Picture Producers, fired the ten and issued what came to be known as the Waldorf Statement, which committed the studios to firing anyone with ties to the Communist Party, whether called to testify before HUAC or not.  This commitment in practice extended well beyond party members to anyone who refused to “cooperate” with HUAC.

           Neither Foreman nor Kramer was within HUAC’s sights in 1947.  At the time, the two had banded together in the small, independent Stanley Kramer Production Company, specializing in socially relevant films that aimed to attract “war-hardened young audiences who were tired of the slick, superficial entertainments the big Hollywood studios specialized in and [were] hungry for something more meaningful” (p.59).  In March 1951, Kramer Production became a sub-part of Columbia Pictures, one of Hollywood’s major studios.   In June of that year, while finishing the script for High Noon, Foreman received his subpoena to testify before HUAC.  The subpoena was an “invitation to an inquisition” (p.xii), as Frankel puts it.

           HUAC, in the words of writer David Halberstam, was a collection of “bigots, racists, reactionaries and sheer buffoons” (p.76). The Committee acted as judge, jury and prosecutor, with little concern for basic civil liberties such as the right of the accused to call witnesses or cross-examine the accuser.  Witnesses willing to cooperate with the Committee were required to undergo a “ritual of humiliation and purification” (p.xii), renouncing their membership in the Communist Party and praising the Committee for its devotion to combating the Red plot to destroy America.  A “defining part of the process” (p.xiii) entailed identifying other party members or sympathizers – the infamous “naming of names” — which became an end in itself for HUAC, not merely a means to obtain more information, since the Committee already had the names of most party members and sympathizers working in Hollywood.  Foreman was brought to the Committee’s attention by Martin Berkeley, an obscure screenwriter and ex-Communist who emerges as one of the book’s more villainous characters — Hollywood’s “champion namer of names” (p.241).

           Loath to name names, Foreman had few good options.  The primary alternative was to invoke the Fifth Amendment against self-incrimination and refuse to answer questions. But such witnesses appeared to have something to hide, and often were blacklisted for failure to cooperate with the Committee.  When he testified before HUAC in September 1951, Foreman stressed that he loved his country as much as anyone on the Committee and used his military service during World War II to demonstrate his commitment to the United States.  But he would go no further, refusing to name names.  Foreman conceded for the record that he “wasn’t a Communist now, and hadn’t been one in 1950 when he signed the Screen Writers Guild loyalty oath” (p.201).  The Committee did not hold Foreman in contempt, as it had done with the Hollywood Ten.  But it didn’t take Foreman long to feel the consequences of his refusal to “cooperate.”

           Kramer, who had initially been supportive of Foreman, perhaps out of concern that Foreman might name him as one with communist ties, ended by acceding to Columbia Pictures’ position that Foreman was too tainted to continue to work for its subsidiary.  Foreman left Kramer Production with a lucrative separation package, more than any other blacklisted screenwriter. His attempt to start his own film production company went nowhere when it became clear that anyone working for the company would be blacklisted.  Foreman, a “committed movie guy” who “passionately believed in [films] as the most successful and popular art form ever invented” (p.218), was finished in Hollywood.  He and Kramer never spoke again.

* * *

            Kramer had had little direct involvement with the early shooting of High Noon. But after Foreman’s departure, he reviewed the film and was deeply dismayed by what he saw.  He responded by making substantial cuts, which he later claimed had “saved” the film.  But in Frankel’s account, Cooper rather than Kramer saved High Noon, making the film an unexpected success.  Prior to his departure, Foreman had suggested to Cooper, who was working for a fraction of his normal fee, that he consider withdrawing from High Noon to preserve his reputation.  Cooper refused. “You know how I feel about Communism,” Frankel quotes Cooper telling Foreman, “but you’re not a Communist now and anyhow I like you, I think you’re an honest man, and I think you should do what is right” (p.170-71).

            Kramer and Foreman were initially reluctant to consider Cooper for the lead role in High Noon.  At age fifty, he “looked at least ten years too old to play the marshal.  And Cooper was exactly the kind of big studio celebrity actor that both men tended to deprecate” (p.150).  Yet, Cooper’s “carefully controlled performance,” combining strength and vulnerability, gave not only his character but the entire picture “plausibility, intimacy and human scale” (p.252), Frankel writes.  Will Kane is “no superhuman action hero, just an aging, tired man seeking to escape his predicament with his life and his new marriage intact, yet knowing he cannot . . . It is a brave performance, and it is hard to imagine any other actor pulling it off with the same skill and grace” (p.252).  None of the “gifted young bucks” whom Kramer and Foreman would likely have preferred for the lead role, such as Marlon Brando, William Holden, or Kirk Douglas, could have done it with “such convincing authenticity, despite all their talent.  In High Noon, Gary Cooper is indeed the truth” (p.252).

            High Noon also established Cooper’s co-star, Grace Kelly, playing Marshal Kane’s new wife Amy in her first major film.  Kelly was some 30 years younger than Cooper and many, including Kramer, considered the pairing a mismatch. But she came cheap and the pairing worked. Katy Jurado, a star in her native Mexico, played the other woman in the film, Helen Ramirez, who had been the girlfriend of both Marshal Kane and his adversary Miller.  During the film, she is involved romantically with Kane’s feckless deputy, Harvey Pell, played by Lloyd Bridges.  High Noon was only Jurado’s second American film, but she was perfect in the role of a sultry Mexican woman.  By design, Foreman created a dichotomy between the film’s male hero — a man of “standard masculine characteristics, inarticulate, stubborn, adept at and reliant on gun violence” (p.253) — and its two women characters who do not fit the conventional models that Western films usually impose on female characters.  The film’s “sensitive focus on Helen and Amy – remarkable for its era and genre – is one of the elements that make it an extraordinary movie” (p.255), Frankel contends.

           Frankel pays almost as much attention to the movie’s stirring theme song, “Do Not Forsake Me, Oh My Darling,” sung by Tex Ritter, as he does to the film’s characters.  The musical score was primarily the work of Dimitri Tiomkin, a Jewish immigrant from the Ukraine, with lyricist Ned Washington providing the words.  The pair produced a song that could be “sung, whistled, and played by the orchestra all the way through the film, an innovative approach that had rarely been used in movies before” (p.230). Ritter’s raspy voice proved ideally suited to the song’s role of building tension in the film (the better-known Frankie Laine recorded a “more synthetic and melodramatic” (p.234) version that surpassed Ritter’s in sales).  The song’s narrator is Kane himself, addressing his new bride and expressing his fears and longings in music.  The song, whose melody is played at least 12 times during the movie, encapsulates the plot while explaining the marshal’s “inner conflict in a way that he himself cannot articulate” (p.232). Its repetition throughout the film reminds us that Kane’s life and happiness are “on the line, yet he cannot walk away from his duty” (p.250).

           Frankel also dwells on the use of clocks in the film to heighten tension as 12 o’clock, high noon, approaches.  The clocks, which become bigger as the film progresses, “constantly remind us that time is running out for our hero.  They help build and underscore the tension and anxiety of his fruitless search for support.  There are no dissolves in High Noon – none of the usual fade-ins and fade-outs connoting the unseen passage of time – because time passes directly in front of us.  Every minute counts – and is counted” (p.250).

           High Noon was an instant success from the time it came out in the summer of 1952, an “austere and unusual piece of entertainment,” as Frankel describes the film, “modest, terse, almost dour . . . with no grand vistas, no cattle drives, and no Indian attacks, in fact no gunplay whatsoever until its final showdown.  Yet its taut, powerful storytelling, gritty visual beauty, suspenseful use of time, evocative music, and understated ensemble acting made it enormously compelling” (p.249).   But the film was less popular with critics, many of whom considered it overly dramatic and corny.

          The consensus among the cognoscenti was that the film was “just barely disguised social drama using a Western setting and costumes,” as one critic put it, the “favorite Western for people who hate Westerns” (p.256).  John Wayne argued that Marshal Kane’s pleas for help made him look weak.  Moreover, Wayne didn’t like the negative portrayal of the church people, which he saw as an attack on American values.  The American Legion also attacked the film on the ground that it was infected by the input of communists and communist sympathizers.

* * *

          After leaving the High Noon set, Foreman spent much of the 1950s in London, where he had limited success in the British film industry while his marriage unraveled.  For a while, he lost his American passport, pursuant to a State Department policy of denying passports to anyone it had reason to suspect was Communist or Communist-leaning, making him a man without a country until a court overturned the policy.  Kramer left Columbia Pictures after High Noon.  He went back to being an independent producer and in that capacity established a reputation as Hollywood’s most consistently liberal filmmaker.  To this day, the families of Foreman and Kramer, who died in 1984 and 2001, respectively, continue to spar over which of the two deserves more credit for High Noon’s success.  Cooper continued to make films after High Noon, most of them westerns of middling quality, “playing the same role over and over” (p.289) as he aged and his mobility grew more restricted.  He kept in touch with Foreman up until his own death from prostate cancer in 1961.

* * *

         Frankel returns at the end of his work to Foreman’s view of High Noon as an allegory for the Hollywood blacklisting process — a single man seeking to preserve honor and confront evil alone when everyone around him wants to cut and run. But, Frankel argues, seen on the screen at a distance of more than sixty years, the film’s politics are “almost illegible.” Some critics, he notes, have suggested that Kane, rather than being a brave opponent of the blacklist, could “just as readily be seen as Senator Joseph McCarthy bravely taking on the evil forces of Communism while exposing the cowardice and hypocrisy of the Washington establishment” (p.259).  Sometimes a good movie is just a good movie.

Thomas H. Peebles

La Châtaigneraie, France

October 3, 2018

 


Filed under Film, Politics, United States History

Magic Moscow Moment

 

Stuart Isacoff, When the World Stopped to Listen:

Van Cliburn’s Cold War Triumph and Its Aftermath 

            Harvey Lavan Cliburn, Jr., known to the world as “Van,” was the pianist from Texas who at age 23 astounded the world when he won the first Tchaikovsky International Piano Competition in Moscow in 1958, at the height of the Cold War.  The Soviet Union, fresh from launching the satellite Sputnik into orbit the previous year and thereby gaining an edge on the Americans in worldwide technological competition, looked at the Tchaikovsky Competition as an opportunity to showcase its cultural superiority over the United States.  Stuart Isacoff’s When the World Stopped to Listen: Van Cliburn’s Cold War Triumph and Its Aftermath takes us behind the scenes of the 1958 competition to show the machinations that led to Cliburn’s selection in Moscow.

            They are intriguing, but come down to this: the young Cliburn was so impossibly talented, so far above his fellow competitors, that the competition’s jurors concluded that they had no choice but to award him the prize.  But before the jurors announced what might have been considered a politically incorrect decision to give the award to an American, they felt compelled to present their dilemma to Soviet party leader and premier Nikita Khrushchev. Considered, unfairly perhaps, a country bumpkin lacking cultural sophistication, Khrushchev asked who had been the best performer.  The answer was Cliburn.  According to the official Soviet version, Khrushchev responded with a simple, straightforward directive: “Then give him the prize” (p.156).

            Isacoff, a professional pianist as well as an accomplished writer, suggests that there was more to Khrushchev’s directive than what the official version allows.  But his response and the official announcement two days later, on April 14, 1958, that Cliburn had won first place make for an endearing high point of Isacoff’s spirited biography.  The competition in Moscow and its immediate aftermath form the book’s core, about 60%. Here, Isacoff shows how Cliburn became a personality known worldwide — “the classical Elvis” and “the American Sputnik” were just two of the monikers given to him – and how his victory contributed appreciably to a thaw in Cold War tensions between the United States and the Soviet Union. The remaining 40% of the book is split roughly evenly between Cliburn’s life prior to the Moscow competition, as a child prodigy growing up in Texas and his ascendant entry into the world of competitive piano playing; and his post-Moscow life, fairly described as descendant.

            Cliburn never recaptured the glory of his 1958 moment in Moscow, and his life after receiving the Moscow prize was a slow but steady decline, up to his death from bone cancer in 2013.  For the lanky, enigmatic Texan, Isacoff writes, “triumph and decline were inextricably joined” (p.8).

* * *

            Cliburn was born in 1934, in Shreveport, Louisiana, the only child of Harvey Lavan Cliburn, Sr., and Rildia Bee O’Bryan Cliburn.  When he was six, he moved with his parents from Shreveport to the East Texas town of Kilgore.  Despite spending his earliest years in Louisiana, Cliburn always considered himself a Texan, with Kilgore his hometown.   Cliburn’s father worked for Magnolia Oil Company, which had relocated him from Shreveport to Kilgore, a rough-and-tumble oil “company town.”  We learn little about the senior Cliburn in this biography, but mother Rildia Bee is everywhere. She was a dominating presence in her son’s life, not only in his youthful years but also throughout his adult life, up to her death in 1994 at age 97.

        Prior to her marriage, Rildia had been a pupil of renowned pianist Arthur Friedheim.  It was Southern mores, Isacoff suggests, that discouraged her from pursuing what looked like her own promising career as a pianist.  But with the arrival of her son, she found a new outlet for her seemingly limitless musical energies.  Rildia was “more teacher than nurturer” (p.12), Isacoff writes, bringing discipline and structure to her son, who had started playing the piano around age 3.  From the start, the “sonority of the piano was entwined with his feelings for his mother” (p.12).  By age 12, Cliburn had won a statewide piano contest, and had played with the Houston Symphony Orchestra in a radio concert.  In adolescence, with his father fading in importance, Cliburn’s mother continued to dominate his life. “Despite small signs of teenage waywardness, when it came to his mother, Van was forever smitten” (p.21).

               In 1951, when their son was 17, Rildia and Harvey Sr. sent him off to New York to study at the prestigious Juilliard School, a training ground for future leaders in music and dance.  There, he became a student of the other woman in his life, Ukraine-born Rosina Lhévinne, a gold-medal graduate of the Moscow Conservatory whose late husband Josef had been considered one of the world’s greatest pianists.  Like Rildia, Lhévinne was a force of nature, a powerful influence on the young Cliburn.  Improbably, Lhévinne and Rildia for the most part saw eye to eye on the best way to nurture the talents of the prodigious young man.  Both women focused Cliburn on “technical finesse and beauty of sound rather than on musical structure,” realizing that his best qualities as a pianist “rested on surface polish and emotional persuasiveness” (p.54).  Each recognized that for Cliburn, music would always be “visceral, not abstract or academic.  He played the way he did because he felt it in the core of his being” (p.34).

           More than Rildia, Lhévinne was able to show Cliburn how to moderate and channel these innate qualities.  Without her stringent guidance, Isacoff indicates, Cliburn might have lapsed into “sentimentality, deteriorating into the pianistic mannerisms of a high romantic” (p.56). Although Cliburn learned through Lhévinne to hold his interpretative flourishes in check, his “overriding personality – emotionally exuberant, and unshakably sentimental – was still present in every bar” (p.121).  By the time he left for the Moscow competition, Cliburn had demonstrated a “natural ability to grasp and convey the meaning of the music, to animate the virtual world that arises through the art’s subtle symbolic gestures. It set him apart” (p.18).

          During his Juilliard years in New York, the adult Cliburn personality the world would soon know came into view: courteous and generous, sentimental and emotional.  He had by then also developed the idiosyncratic habit of being late for just about everything, a habit that continued throughout his life.  Isacoff mentions one concert after another at which Cliburn was late, often by a matter of hours.  Both in the United States and abroad, he regularly compensated for showing up late by beginning with America’s national anthem, “The Star-Spangled Banner.”  At Juilliard, Cliburn also began a smoking habit that stayed with him for the remainder of his life.  Except when he was actually playing — when he had the habit of looking upward, “as if communing with the heavens whenever the music reached an emotional peak” (p.6) — it was difficult to get a photo of him without a cigarette in his hands or mouth.

           It may have been at Juilliard that Cliburn had his first homosexual relationship, although Isacoff plays down this aspect of Cliburn’s early life.  He mentions Cliburn’s high school experience of dating a girl and attending the senior prom.  Then, a few pages later, he notes matter-of-factly that a friendship with a fellow male Juilliard student had “blossomed into romance” (p.35).  But there are many questions about Cliburn’s sexuality that seem pertinent to our understanding of the man.  Did Cliburn undergo any of the torment that often accompanies the realization in adolescence that one is gay, especially in the 1950s?  Did he “come out” to his friends and acquaintances, in Texas or New York, or did he live the homosexual life largely in the closet?  Were his parents aware of his sexual identity, and if so, what was their reaction?  None of these questions is addressed here.

            With little fanfare, Juilliard nominated Cliburn in early 1958 for the initial Tchaikovsky International Competition, taking advantage of an offer from the Rockefeller Foundation to pay travel expenses for one entrant in each of the competition’s two components, piano and violin.  The Soviet Union, which paid the remaining expenses for the participants, envisioned a “high-culture version of the World Cup, pitting musical talents from around the globe against one another” (p.4). The Soviets confidently assumed that showcasing their violin and piano expertise, after their technological success the previous year with the Sputnik launch, would provide another propaganda victory over the United States.

            Soviet pianists who wished to enter the competition had to pass a daunting series of tests, musical and political, to qualify, with training regimens similar to those of the country’s Olympic athletes.  Many of the Soviet Union’s emerging piano stars were reluctant to jump into the fray.  Each had a specific reason, along with a “general reluctance to become involved in the political machinations of the event” (p.59).  Lev Vlassenko, a “solid, well-respected pianist” who became a lifelong friend of Cliburn in the aftermath of the competition, emerged as the presumptive favorite, “clearly destined to win” (p.60).

            On the American side, the US State Department only reluctantly gave its approval to the competition, fearing that it would be rigged.  The two pianists whom the Soviets considered the most talented Americans, Jerome Lowenthal and Daniel Pollack, traveled to Moscow at their own expense, unlike Cliburn (pop singer Neil Sedaka was among the competitors for the US but was barred by the Soviets as too closely associated with decadent rock ‘n roll; they undoubtedly did Sedaka a favor, as his more lucrative pop career was just then beginning to take off).  Other major competitors came from France, Iceland, Bulgaria, and China.

            For the competition’s first round, Cliburn was assigned pieces from Bach, Mozart, Chopin, Scriabin, Rachmaninoff, Liszt and Tchaikovsky.  The audience at the renowned Moscow Conservatory, where the competition took place, fell from the beginning for the Texan and his luxurious sound. They “swooned at the crooner in him . . . Some said they discerned in his playing a ‘Russian soul’” (p.121).  But among the jurors, who carried both political and aesthetic responsibilities, reaction to Cliburn’s first round was mixed.  Some were underwhelmed with his renditions of Mozart and Bach, but all found his Tchaikovsky and Rachmaninoff “out of this world,” as one juror put it (p.120).

          Isacoff likens the jurors’ deliberations to a round of speed dating, “where the sensitive antennae of the panelists hone in on the character traits of each candidate. . . There is no magical formula for choosing a winner; in the end, the decision is usually distilled down to a basic overriding question: Do I want to hear this performer again?”(p.117).  Famed pianist Sviatoslav Richter, who served on the jury, emerges here as the equivalent of the “hold out juror” in an American criminal trial, “willing to create a serious ruckus when he felt that the deck was being stacked against the American.  As the competition progressed, his fireworks in the jury room would be every bit the equal of the ones onstage” (p.114).

            Cliburn’s second round program was designed to show range.  Beethoven, Chopin and Brahms were the heart of a romantic repertoire.  He also played the Prokofiev Sixth, a modernist piece that reflected the political tensions and fears of 1940 Russia.  Cliburn received a 15-minute standing ovation at the end of the round, the audience voting literally with its feet and hands.  In the jury room, Richter continued to press the case for Cliburn, although the jury ranked him only third, tied with Soviet pianist Naum Shtarkman. Overall, Vlassenko ranked first and eminent Chinese pianist Shikun Liu second.

            But in the third round, Cliburn blew the competition away.  The round  began with Tchaikovsky’s First Piano Concerto, for which Cliburn delivered an “extraordinary” interpretation, with every tone “imbued with an inner glow, with long phrases concluding in an emphatic, edgy pounce. The effect was simply breathtaking” (p.146). Cliburn’s rendition of Rachmaninoff’s “treacherously difficult” (p.147) Piano Concerto no. 3 was even more powerful.  In prose that strains to capture Cliburn’s unique brilliance, Isacoff explains:

After Van, people would never again hear this music the same way. . . There is no simple explanation for why in that moment Van played perhaps the best concert of his life. Sometimes a performer experiences an instant of artistic grace, when heaven seems to open up and hold him in the palm of its hand – when the swirl of worldly sensations gives way to a pervasive, knowing stillness, and he feels connected to life’s unbroken dance.  If that was not exactly Van’s experience when playing Rachmaninoff Concerto no. 3, it must have come close (p.146-47).

         Cliburn had finally won over even the most recalcitrant jurors, who briefly considered a compromise in which Cliburn and Vlassenko would share the top prize.  But the final determination was left to premier Khrushchev.  The Soviet leader’s instantaneous and decisively simple response quoted above was the version released to the press.  But with the violin component of the competition going overwhelmingly to the Soviets, the ever-shrewd Khrushchev appears to have concluded that awarding the piano prize to the American would underscore the competition’s objectivity and fairness.  One advisor recalled Khrushchev saying to her: “The future success of this competition lies in one thing: the justice that the jury gives” (p.156).  The jury’s official and public decision of April 14, 1958 had Cliburn in first place, with Vlassenko and Liu sharing second.  Cliburn could not have accomplished what he did, Isacoff writes, without Khrushchev, his “willing partner in the Kremlin” (p.206).

        Cliburn had another willing partner in Max Frankel, then the Moscow correspondent for the New York Times (and later, its Executive Editor). Frankel had sensed a good story during the competition and reported extensively on all its aspects.  He also pushed his editors back home to put his dispatches on page 1.  One of his stories forthrightly raised the question whether the Soviets would have the courage to award the prize to Cliburn.  For Isacoff, Frankel’s reporting and the pressure he exerted on his Times editors to give it a prominent place also contributed to the final decision.

             After his victory in Moscow, Cliburn went on an extensive tour within the Soviet Union. To the adoring Russians, Cliburn represented the “new face of freedom.” Performing under the auspices of a repressive regime, he “seemed to answer to no authority other than the shifting tides of his own soul” (p.8).  Naïve and politically unsophisticated, Cliburn raised concerns at the State Department when he developed the habit of describing the Russians as “my people,” comparing them to Texans and telling them that he had never felt so at home anywhere else.

          A month after the Moscow victory, Cliburn returned triumphantly to the United States amidst a frenzy that approached what he had generated in the Soviet Union.  He became the first (and, as of now, only) classical musician to be accorded a ticker tape parade in New York City, in no small measure because of lobbying by the New York Times, which saw the parade as vindication for its extensive coverage of the competition.

          After Cliburn’s Moscow award, the Soviet Union and the United States agreed to host each other’s major exhibitions in the summer of 1959.  It started to seem, Isacoff writes, that “after years of protracted wrangling, a period of true detente might actually be dawning” (p.174).   The cultural attaché at the American Embassy in Moscow wrote that Cliburn had become a “symbol of the unifying friendship that overcomes old rivalries.  . . a symbol of art and humanity overruling political pragmatics” (p.206).

           A genuine if improbable bond of affection had developed in Moscow between Khrushchev and Cliburn. That bond endured after Cold War relations took several turns for the worse, first when the Soviets shot down the American U-2 spy plane in 1960, then with the erection of the Berlin Wall in 1961 and the direct confrontation in 1962 over Soviet placement of missiles in Cuba. The bond even continued after Khrushchev’s fall from power in 1964, indicating that it had some basis beyond political expediency.

           But Cliburn’s post-Moscow career failed to recapture the magic of his spring 1958 moment.  The post-Moscow Cliburn seemed to be beleaguered by self-doubt and burdened by psychological tribulations that are not fully explained here.  “Everyone had expected Van’s earlier, youthful qualities to mature and deepen over time,” Isacoff writes.  But he “never seemed to grow into the old master they had hoped for . . . At home, critics increasingly accused Van of staleness, and concluded he was chasing after momentary success with too little interest in artistic growth” (p.223).  Even in the Soviet Union, where he made several return visits, critics “began to complain of an artistic decline” (p.222).  In these years, Cliburn “developed an enduring fascination with psychic phenomena and astrology that eventually grew into an obsession. The world of stargazing became a vital part of his life” (p.53).

           Cliburn’s mother remained a dominant force in his life throughout his post-Moscow years, serving as his manager until she was almost 80 years old.  As she edged toward 90, she and her son continued to address one another as “Little Precious” and “Little Darling” (p.230).  Her death at age 97 in 1994 was predictably devastating for Cliburn. In musing about his mother’s effect on Cliburn’s career trajectory, Isacoff wonders whether Rildia Bee, the “wind that filled his sails” might also have been the “albatross that sunk him” (p.243).  While many thought that Cliburn might collapse with the death of his mother, by this time he was in a relationship with Tommy Smith, a music student 29 years younger.  With Smith, Cliburn had “at last found a fulfilling, loving union” (p.242). Smith traveled regularly with Cliburn, even accompanying him to Moscow in 2004, where none other than Vladimir Putin presented Cliburn with a friendship award.  Smith was at Cliburn’s side throughout his battle with bone cancer, which took the pianist’s life in 2013 at age 79.

* * *

            Tommy Smith became the happy ending to Cliburn’s uneven life story — a story which for Isacoff resembles that of a tragic Greek hero who “rose to mythical heights in an extraordinary victory that proved only fleeting, before the gods of fortune exacted their price” (p.8).

Thomas H. Peebles

La Châtaigneraie, France

September 5, 2018



Filed under American Politics, History, Music, Soviet Union, United States History

The Close Scrutiny of History

Richard Evans, The Third Reich in History and Memory 

            Books about Adolf Hitler’s Third Reich continue to proliferate, feeding the reading public’s seemingly insatiable appetite for information about one of history’s most odious regimes.  But spending one’s limited reading time on Hitler, the Nazis and the Third Reich is for most readers not a formula for uplifting the spirit.  Those who wish to broaden their understanding of the Nazi regime yet limit their engagement with the subject are likely to find Richard Evans’ The Third Reich in History and Memory well suited to their needs.  If Peter Hayes’ Why: Explaining the Holocaust, reviewed here earlier this month, was a vehicle to see the dense and intimidating forest of the Holocaust through its many trees, Evans’ work might be considered a close-up look at selected trees within the forest of the Third Reich.

          The Third Reich in History and Memory provides an indication of how broadly our knowledge of the Nazi regime has expanded in the first two decades of the 21st century alone.  The book is a compilation of Evans’ earlier reviews of other studies of the Nazi regime, most of which were previously published.  Evans uses the word “essay” to describe his reviews, and that is the appropriate term. The book consists, as he puts it, of “extended book reviews that use a new study of one or other aspect of the Third Reich as a starting point for wider reflections” (p.x). All reviews/essays were published originally in this century, most since 2010; the oldest dates to 2001. Evans, a prolific scholar who has been Regius Professor of History at Cambridge University, President of Cambridge’s Wolfson College, and Provost of London’s Gresham College, is also the author of the Third Reich Trilogy, a three-volume work that is probably the most comprehensive single study of Nazi Germany.

          The “Memory” portion of Evans’ title alludes to what he considers the most remarkable change in historical work on Nazi Germany since the late 20th century, the “increasing intertwining of history and memory” (p.ix), reflected in particular in several reviews/essays that address post-war Germany.  It is now almost impossible, Evans observes, to write about the Third Reich “without also thinking about how its memory survived, often in complex and surprising ways, in the postwar years” (p.ix). But memory “needs to be subjected to the close scrutiny of history if it is to stand up, while history’s implications for the collective cultural memory of Nazism in the present need to be spelled out with precision as well as with passion” (p.x; the collection does not include a review of Lawrence Douglas’ The Right Wrong Man, reviewed here in July 2017, an account of the war crimes trial of John Demjanjuk and a telling reminder of the limits of memory of Holocaust survivors).

            The book contains 28 separate reviews, arranged into seven sections: German antecedents to the Third Reich; internal workings of the regime; its economy; its foreign policy; its military decision-making; the Holocaust; and the regime’s after effects.  Each of the seven sections contains three to six reviews; each review is an individual chapter, with each chapter only loosely related to the others in the section.  The collection begins with chapters on Imperial Germany’s practices in its own colonies prior to World War I and the possibility of links to the Nazi era; it ends with a chapter on post-World War II German art and architecture, and what they might tell us about the Third Reich’s legacy.  In between, individual chapters look at a diverse range of subjects, including Hitler’s mental and physical health; his relationship with his ally Benito Mussolini; the role of the Krupp industrial consortium in building the German economy in the 1930s and 1940s; and the role of the German Foreign Office in the conduct of the war.  In these and the book’s other chapters, Evans reveals his mastery of unfamiliar aspects of the Third Reich.

* * *

            Germany’s pre-World War I colonies seemed an irrelevance and were largely forgotten in the years immediately following World War II.  But with the emergence of what is sometimes called post-colonial studies, historians “now put racism and racial ideology instead of totalitarianism and class exploitation at the center of their explanations of National Socialism [and] . . . the history of the German colonizing experience no longer seem[s] so very irrelevant” (p.7).  Evans’ two initial chapters, among the most thought-provoking in the collection, review two works addressing the question of the extent to which Germany’s colonial experience prior to World War I may have established a foundation for its subsequent attempt to subjugate much of Europe and eliminate European Jewry: Sebastian Conrad, German Colonialism: A Short History; and Shelley Baranowski, Nazi Empire: Colonialism and Imperialism from Bismarck to Hitler.

          Germany’s pre-World War I overseas empire was short-lived compared to that of the other European powers.  It came into being, largely over Bismarck’s objections, in the 1880s, and ended abruptly with Germany’s defeat in World War I, after which it was stripped of all its overseas territories (along with much of its European territory).  But in the final decades of the 19th century, Germany amassed an eclectic group of colonies that by 1914 constituted Europe’s 4th largest empire, after those of Great Britain, France and the Netherlands.  It included, in Africa, Namibia, Cameroon, Tanganyika (predecessor to Tanzania), Togo, and the predecessors to Rwanda and Burundi, along with assorted Pacific Islands.

            In its relatively brief period as an overseas colonizer, Germany earned the dubious distinction of being the only European power to introduce concentration camps, “named them as such and deliberately created conditions so harsh that their purpose was clearly as much to exterminate their inmates as it was to force them to work” (p.6). Violence, “including public beatings of Africans,” was “a part of everyday life in the German colonies” (p.10). In a horrifying 1904-07 war against the Herero and Nama tribes in Namibia, Germany wiped out half of the population of each, one of the clearest instances of genocide perpetrated by a European power in Africa. Germany alone among the European powers banned racial intermarriage in its colonies.  Yet Evans, writing both for himself and the two works under review, cautions against drawing too direct a line between the pre-World War I German colonial experience and the atrocities perpetrated in World War II.  German colonialism, he concludes, “does seem to have been more systematically racist in conception and more brutally violent in operation than that of other European nations, but this does not mean it inspired the Holocaust” (p.13).

         Almost all chapters in the book intersect in some way with the Holocaust and thus with Hayes’ work.  But that intersection is most evident in the sixth of the seven sections, “The Politics of Genocide,” where Evans reviews Timothy Snyder’s Bloodlands: Europe Between Hitler and Stalin, Mark Mazower’s Hitler’s Empire and, in a chapter entitled “Was the ‘Final Solution’ Unique?,” a compendium of German essays addressing this question.  This chapter, itself originally in German but revised and translated into English for this volume, confronts the argument that the Holocaust was a crime without precedent or parallel in history, so appalling that it is “illegitimate to compare it with anything else” (p.365).  Evans dismisses this argument as “theological.”

          Comparison “doesn’t mean simply drawing out similarities,” Evans argues; it also means “isolating differences and weighing the two” (p.365).  If the Holocaust was unique, the “never again” slogan becomes meaningless.  Ascribing categorical uniqueness to the Holocaust may be rewarding for theologians, he writes.  But, sounding much like Hayes, he reminds us that the historian must approach the Holocaust in the “same way as any other large-scale historical phenomenon, which means asking basic, comparative questions and trying to answer them at the level of secular rationality” (p.365).  Asking comparative questions at this level nevertheless leads Evans to find a unique quality to the Holocaust, without parallel elsewhere: its sweeping, racialist ideological underpinnings.

           The Nazi genocide of the Jews was unique, Evans contends, in that it was intended to be geographically and temporally unlimited.  To Hitler, the Jews were a world enemy, a “deadly, universal threat” to the existence of Germany that had to be “eliminated by any means possible, as fast as possible, as thoroughly as possible” (p.381).  The Nazis’ obsessive desire to be “comprehensive and make no exceptions, anywhere, is a major factor distinguishing the Nazis’ racial war from all other racial wars in history” (p.376-77).  Young Turk nationalists launched a campaign of genocide against the Armenian Christian minority in Anatolia during World War I.  But the Armenians were not seen as part of a world conspiracy against the Turks, as the Germans saw the Jews.  The 1994 assault by Hutus on Tutsis in the former German colony of Rwanda was also geographically limited.   Moreover, both the Soviet Union and Nazi Germany occupied Poland during World War II after the August 1939 Ribbentrop-Molotov non-aggression pact (detailed in Roger Moorhouse’s The Devils’ Alliance: Hitler’s Pact With Stalin, 1939-41, reviewed here in May 2016).  The Soviet occupation of Poland, albeit brutal, was carried out to implement ideological goals but was “not an attempt to exterminate entire peoples” (p.367).

           This difference between the Soviet and Nazi occupation in Poland leads Evans to a severe reproach of Bloodlands, Timothy Snyder’s otherwise highly-acclaimed examination of the mass murders conducted by the Soviets and the Nazis in Poland, Ukraine, Belarus, Russia and the Baltic States during the 1930s and the war years, in which Snyder emphasizes similarities between the policies and practices of the two regimes.  Most prominent among Evans’ numerous objections to Bloodlands  is that its comparison of Hitler’s plans for Eastern Europe with Stalin’s mass murders in the same geographic areas “distracts attention from what was unique about the extermination of the Jews. That uniqueness consisted not only in the scale of its ambition, but also in the depth of the hatred and fear that drove it on” (p.396).  Bloodlands, Evans concludes, “forms part of a post-war narrative that homogenizes the history of mass murder by equating Hitler’s policies with those of Stalin” (p.398).  We “do not need to be told again about the facts of mass murder,” he petulantly intones, but rather to “understand why it took place and how people could carry it out, and in this task Snyder’s book is of no use” (p.398).

         Mazower’s Hitler’s Empire, the third work under review in the section on the Holocaust, draws a more sympathetic review. Mazower considered the policies and practices of the German occupation of much of Europe during World War II against the backdrop of the British and other European empires.  Hitler’s empire, Evans writes, was the “shortest-lived of all imperial creations, and the last” (p.364).  But for a brief moment in the second half of 1941, it seemed possible that the Nazis’ megalomaniac vision of world domination, taking on Great Britain and the United States after defeating the Soviet Union, might become reality.  The Nazis, however, had “no coherent idea of how their huge new empire was to be made to serve the global purposes for which it was intended” (p.358).  Mazower’s “absorbing and thought-provoking account,” Evans concludes, paradoxically “makes us view the older European empires in a relatively favorable light.  Growing up over decades, even centuries, they had remained in existence only through a complex nexus of collaboration, compromise and accommodation. Racist they may have been, murderous sometimes, even on occasion exterminatory, but none of them were created or sustained on the basis of such a narrow or exploitative nationalism as animated the Nazi empire” (p.364).

            Three of the works which Evans reviews will be familiar to assiduous readers of this blog: R.M. Douglas’ Orderly and Humane: The Expulsion of the Germans after the Second World War (reviewed here in August 2015); Heike Görtemaker’s Eva Braun: Life With Hitler (March 2013); and Ian Kershaw’s The End: Hitler’s Germany, 1944-45 (December 2012).   All three earn Evans’ high praise.  Douglas’ book tells the little-known story of the expulsion of ethnic Germans, Volksdeutsche, from Czechoslovakia, Poland, Hungary, Yugoslavia and Romania in 1945 and 1946, into a battered and beaten Germany.  It is one example of research on post-war Germany, where the “subterranean continuities with the Nazi era have become steadily more apparent” (p.x).

             Douglas breaks new ground by showing how the ethnic cleansing of “millions of undesirable citizens did not end with the Nazis but continued well into the years after the fall of the Third Reich, though this time directed against the Germans rather than perpetrated by them” (p.x).  His work thus constitutes a “major achievement,” at last putting the neglected subject matter on a scholarly footing.  Orderly and Humane “should be on the desk of every international policy-maker as well as every historian of twentieth century Europe.  Characterized by assured scholarship, cool objectivity and convincing detail,” Douglas’ work is also a “passionate plea for tolerance and fairness in a multi-cultural world” (p.412).

           The central question of Görtemaker’s biography of Eva Braun, Hitler’s mistress (and his wife for 24 hours, before the newlyweds committed suicide in the Berlin bunker on the last day of April 1945), is the extent to which Braun was knowledgeable about, and therefore complicit in, the enormous war crimes and crimes against humanity engineered by the man in her life.  Evans finds highly convincing Görtemaker’s conclusion that Braun was fully cognizant of what her man was up to: “There can be little doubt that Eva Braun closely followed the major events of the war,” he writes, and that she “felt her fate was bound inextricably to that of her companion’s from the outset” (p.160; I was less convinced, describing Görtemaker’s case as based on “inference rather than concrete evidence,” and noting that Görtemaker conceded that the question whether Braun knew about the Holocaust and the extermination of Europe’s Jewish population “remains finally unanswered”).

            Ian Kershaw is a scholar of the same generation as Evans who rivals him in stature as a student of the Nazi regime — among his many works is a two-volume biography of Hitler.  His The End provides the grisly details on how and why Germany continued to fight in the second half of 1944 and the first half of 1945, when it was clear that the war was lost.  It is, Evans writes, a “vivid account of the last days of Hitler’s Reich, with a real feel for the mentalities and situations of people caught up in a calamity which many didn’t survive, and which those who did took years to overcome” (p.351).

            The remaining chapters in the collection address subjects equally likely to be unfamiliar yet of interest to general readers.  Of course, the advantage of a collection of this sort is that readers are not obliged to read every chapter; they can pick and choose among them.  One editorial weakness to the collection is the absence of any indication at the beginning of each chapter of the specific work under review and where it was first published.  Evans rarely mentions the work under review until well into the chapter. There is a list of “Acknowledgements” at the end that sets out this information.  But the initial entries are in the wrong order, adding confusion and limiting the utility of the list.

* * *

            Evans’ reviews/essays are impressive both for their breadth and their depth.  Throughout, Evans proves to be an able guide for readers hoping to draw informed lessons from recent works about the Third Reich.

Thomas H. Peebles

La Châtaigneraie, France

August 25, 2018


Filed under European History, German History, History

Not Unfathomable

Peter Hayes, Why: Explaining the Holocaust 

            In thinking about the Holocaust, Nazi Germany’s project to exterminate Europe’s Jewish population, words like “incomprehensible” or “unfathomable” often come to mind as the only adequate descriptors.  How could Germany, that cultured land of Beethoven and Bach, Hegel and Kant, become the country of mass murderers Hitler and Himmler? How could so many “ordinary Germans,” to borrow a phrase from the Holocaust debate, buy into, support, and participate in crimes of this enormity?  Rational, evidence-based responses to such questions frequently seem inadequate.  Might the Holocaust be a subject for which the conventional tools of historical explanation are insufficient?  Not at all, Peter Hayes argues in Why: Explaining the Holocaust, where he applies the conventional tools to identify and answer the most critical questions about the Holocaust.

      Characterizing the Holocaust as “incomprehensible” or “unfathomable,” Hayes writes, leads us to “despair, to give up, to admit to being too lazy to make the long effort, and, worst of all, to duck the challenge to our most cherished illusions about ourselves and each other that looking into the abyss of this subject entails” (p.326; disclosure: I have used the word “unfathomable” to describe the Holocaust on several occasions on this blog; e.g. here, 4th paragraph; here, final paragraph).  Hayes, professor emeritus at Northwestern University, Chair of the Academic Committee of the United States Holocaust Memorial Museum, and the author of numerous works on Nazi Germany and the Holocaust, recognizes that a “coherent explanation of why such ghastly carnage erupted from the heart of civilized Europe in the twentieth century seems still to elude people” (p.xiii).  But he asks that we approach the Holocaust “neither in awe nor in anger,” viewing it as a “set of historical events, to be recovered, studied and comprehended by the usual historical means,” that is, “‘carefully and soberly,’ with a mix of precision and feeling, and without engaging in sentimentality or sanctification” (p.325).  The alternative to trying to understand how and why the Holocaust happened is to “capitulate to a belief in fate, divine purpose, or sheer randomness in human events” (p.326).

            Hayes begins this highly analytical yet eminently readable work with an introduction asking the ever-pertinent question, “Why Another Book on the Holocaust?” — an introduction which he might well have titled “Why Would Anyone Want to Study the Holocaust?”  He then moves to seven substantive “why” questions he considers essential in understanding the Holocaust, each a separate chapter:

  • Targets: Why the Jews?
  • Attackers: Why the Germans?
  • Escalation: Why Murder?
  • Annihilation: Why This Swift and Sweeping?
  • Victims: Why Didn’t More Jews Fight Back More Often?
  • Homelands: Why Did Survival Rates Diverge?
  • Onlookers: Why Such Limited Help from the Outside?

For each question, Hayes provides his assessment and summation of the state of current scholarship and research on that aspect of the Holocaust.  In his final chapter, “Aftermath,” he asks: “What Legacies, What Lessons?”  Given the proliferation of specialized research on the Holocaust, Hayes offers his readers what he terms a “comprehensive stocktaking directed squarely at answering the most central and enduring questions about why and how the massacre of European Jewry unfolded” (p.xvi).  His book thus serves as a compact primer on the scholarship involving the Holocaust, rendering it a valuable device to see a dense and intimidating forest through its many trees.

* * *

           Hayes’ evidence-based answers to the book’s seven questions build a narrative of a “perfect storm,” in which centuries of antipathy toward Jews culminated in the enormous crimes of the Holocaust, fueled by Adolf Hitler’s murderous ideology that posited Jews as Germany’s implacable enemy.   The initial chapter, “Targets: Why the Jews,” contains an overview of this antipathy, “deeply rooted in religious rivalry and superstition” (p.35).  Underlying Christian Europe’s historic hostility toward Jews is the notion that Jews had “common repellent and/or ruinous qualities that set them apart from non-Jews.  Descent is determinative; individuality is illusory” (p. 3).

          The latter half of the 19th century gave rise to what we might term “modern anti-Semitism.”  In 1879, the word “anti-Semitism” itself entered into popular usage — an inapt, artificial word, Hayes points out, targeting speakers of Semitic languages, whose syntax and grammatical structure differ from those of standard European languages, yet largely exempting speakers of Arabic, also a Semitic language (Hayes also prefers the Hebrew word “Shoah,” meaning destruction, to “Holocaust,” derived from the ancient Greek term for an “offering totally consumed by fire,” i.e. a religious sacrifice).  In the same time frame, pseudo-scientific racial theories based on eugenics came into vogue to justify discrimination against Jews.  By the late 19th century, Hayes notes, overt hostility toward Jews correlated closely with the economy, becoming more severe during economic downturns.  It was generally harshest in Russia and Eastern Europe, Germany and the Austrian portions of the Hapsburg Empire.

             “Attackers: Why the Germans” explores the particularities of anti-Semitism in German-speaking lands from the 19th century through the rise of Adolf Hitler’s Nazi party during the Great Depression of the late 1920s and early 1930s, but prior to Hitler’s appointment as Weimar Germany’s Chancellor in January 1933.   Even before Germany became a unified state in 1871, 19th century nationalism in German-speaking lands tended to be less inclusive and more tribal than that in Britain or France.  Having developed in reaction to and rejection of the French conquest and occupation under Napoleon, German nationalism “crystallized around the only idea that could unite so much difference, the notion that all the tribes were related and parts of a common people, or Volk” (p.38), a notion that left little room for outsiders, especially Jews.  Before World War I, however, anti-Semitism in Germany was “loud, quotable, recurrent, but it had little political traction or legislative success” (p.44).

             Germany’s defeat in World War I provided the basis for political traction.  The myth of the “stab in the back” quickly arose in the aftermath of the defeat: the German army had been on the cusp of victory in the field, only to be undermined on the home front by an insidious coalition of Jews and leftists.  By this time, German anti-Semitism had been further linked to Bolshevism in the Soviet Union and the specter of violent revolution throughout Germany and Europe.  In this environment, Hitler’s toxic brand of anti-Semitism thrived, a “witches’ brew of self-pity, entitlement, and aggression,” but simultaneously a “form of magical thinking that promised to end all of Germans’ postwar sufferings, the products of defeat and deceit, by banishing their supposed ultimate cause, the Jews and their agents” (p.65).

            Hayes describes Hitler’s ideology as a “bastardized Marxism that substituted race for class” (p.61), in which history is the struggle among races to control space or territory and the Jews are Germany’s most implacable enemy.  The Nazis’ relentless effort to portray the persecution of Jews as acts of self-defense is critical to understanding the subsequent Holocaust, Hayes emphasizes, “so essential as a justification for what the Nazis wanted to do that it repeatedly appears in new forms: They threaten us, so we must strike to protect ourselves” (p.60-61).  But the immediate force behind Hitler’s rise to power was the widespread economic crisis of the Great Depression.

           Contrary to widely held popular belief, Hayes stresses that anti-Semitism did not play a decisive or primary role in bringing the Nazis to power.  Most Germans who voted for the Nazi party did so in spite of its anti-Semitism, not because of it.  Germany’s dire economic situation increased receptivity to the Nazi message and reduced anti-Semitism as a disqualifier for office.  At a time when no political party appeared to have a plan to deal with the economy, the Nazis seemed more dynamic to many Germans than the other major parties: they were authoritarian, unalterably opposed to the faltering Weimar democracy, and able to get things done.  The promise of Nazism was to “restore all that was best in Germany’s traditions yet also to revolutionize the country” (p.68).  But without the collusion of conservative leaders, who expected to use Hitler for their purposes during the economic crisis, the Nazis would not have come to power.

            In his middle chapters, Hayes explains how the project to exterminate Germany’s Jewish population came about in increments once the Nazis gained power in 1933.  Despite what Hitler had said in Mein Kampf, it was in no sense inevitable or preordained that extermination would become the Nazi end game. Nazi policy at the outset concentrated on harassment, intimidation, isolation and dispossession, but stopped short of killing, even though killing metaphors were part of everyday Nazi rhetoric.  At the core of the Nazi vision was an “unwavering dream of a Jew-free environment, since that was a precondition of German strength and happiness” (p.65).

          During the Nazis’ early years in power, Germans divided generally into three groups: “people who endorsed the persecution of the Jews, people who merely accepted it, and people who disliked it but saw little point in protesting, even though they frequently expressed reservations or felt embarrassed about specific actions” (p.98).   Prior to the savage riots of November 1938 known as Kristallnacht, German public opinion generally accepted anti-Semitic policies “except when they threatened the self-interest of non-Jews” (p.99). Violence and viciousness toward Jews “increased steadily during the 1930s in Nazi Germany and in full public view . . . yet the pattern gave rise to too little rejection or revulsion to make the Nazi regime change course” (p.99-100).

       The point at which persecution gave way to mass killing most likely occurred sometime in the second half of 1941, the outgrowth of a series of meetings between Hitler and Himmler after Germany had invaded the Soviet Union in June of that year.  In October 1941, Himmler issued an instruction that forbade further emigration of Jews from the European continent. This document “clearly signaled the end to the policy of driving Jews away . . . and suggested that the Nazis had found a new approach to the Jewish problem” (p.123).  Nazi leaders by then knew they had the means to kill people en masse in gas chambers and began constructing sites to do so. “The Final Solution, the annihilation of the Jews of Europe, was in motion” (p.125).

           But why did so many Germans enthusiastically embrace and participate in state-sponsored killing?  Hayes confronts this question in his chapter, “Annihilation: Why This Swift and Sweeping?” and throughout much of the rest of the book.  This question probably gives rise to more differences and less consensus among historians than any of the book’s other questions.   Here, Hayes surveys the scholarly literature on the subject, in particular Daniel Goldhagen’s Hitler’s Willing Executioners (1996) and Christopher Browning’s Ordinary Men (1992).  Goldhagen, in a work which “the public loved and most historians panned,” argued that Germans killed Jews “because they wanted to; they wanted to because they universally hated Jews; and they hated Jews because Germans always had – their nation’s culture had been thoroughly and pervasively anti-Semitic for hundreds of years” (p.137-38; curiously, Goldhagen’s provocative work found a particularly receptive audience in the reunited Germany of the 1990s).  Browning’s more nuanced study “maintains that anti-Semitic convictions had little to do with the readiness of Germans to commit murder; rather, they acted out of loyalty to one another” (p.138).

           Hayes attempts to summarize and synthesize these and similar works with several salient points.  Above all, he argues, the Nazi regime “succeeded in creating a closed mental world, an ideological echo chamber in which leaders constantly harped on the threat the Jews supposedly constituted and the need for Germans to defend themselves against it” (p.144).  Rank and file Nazi perpetrators engaged in self-delusion, developing a “capacity to distract themselves from what they were doing by calling it something else. Perpetrators never owned up to torturing and slaughtering; they always professed to be serving a sanctified purpose that immunized them from the charge of immorality” (p.154).   Many embraced anti-Semitism as a “conveniently available form of legitimizing what they had been ordered to do. . . . [T]hey did not kill because they hated their victims, but they decided to hate them because they thought they had to kill them” (p.139).

          Hayes dismisses the question of his chapter on victims, “Why Didn’t More Jews Fight Back More Often,” as one posed by succeeding generations “from the comfort of living in liberal and law-observing societies” (p.176).  There was more Jewish resistance than is commonly realized.   Overall, however, the Jewish response to the Nazi onslaught was to “comply with German demands and orders in hopes of preventing them from getting worse” (p.177).   Jews took arms only when they knew the alternative was near-certain death.  The  odds were “stacked against them, because they could not see or could not bear to see what was going to happen to them, because the slimmest chance that some might survive tempted them to avoid committing suicide by fighting back, and because they clung to life as best they could in ever more adverse circumstances” (p.196-97).

          Hayes thus rejects the provocative charge of Raul Hilberg and Hannah Arendt, two early leaders in the study of the Holocaust who contended in the 1960s that the destruction of European Jewry can be explained primarily through the Jews’ lack of resistance and their complicity in the killing itself.  Their “harsh accusations have not stood up to historical analysis over the past forty years” (p.178).  Hayes nonetheless recognizes that in the ghettos established in Poland and elsewhere in Eastern Europe, usually as a prelude to deportation to death camps, the Nazis frequently delegated responsibility for carrying out German instructions to Jewish Councils, a “diabolically effective” means of minimizing the resources they needed to police the Jews and making them “complicit in their own persecution.  In effect, the Nazis applied the tried-and-true colonial practice of indirect rule through favored natives who got privileges or exemptions for punishments in exchange for helping to control everyone else” (p.180).

            The system of divide and conquer functioned in the camps “to the same diabolical effect that it operated in the ghettos. . . The Germans exploited internal divisions and individuals’ will to live right up until the dissolution of the camps” (p.217).   It is, Hayes concludes, both “unfair and inaccurate to hold the Jewish victims responsible for what happened to them.” Whether they lived or died “depended on two things alone: the actions of the Nazi regime and the progress of the Allied armies” (p.195-197).

          As to why help from the outside was so limited, the subject of the penultimate chapter, “Onlookers,” Hayes examines the individual policies of the countries best situated to help Jewish victims: France, Belgium, the Netherlands, Great Britain and the United States.  Overall, a “combination of anti-Semitism and economic and political interests worked to restrict the admission of Jews to other countries throughout the Holocaust and to inhibit other action on their behalf. Sooner or later, every nation that might have helped decided that it had higher priorities than aiding or defending the Jews” (p. 259-60).   The same could be said of the major non-governmental organizations, such as the International Committee of the Red Cross, and almost every transnational religious institution, especially the Catholic Church.  The fate of the Jews of Europe was “always a matter of secondary importance to everyone but themselves and the regime that wished to kill them” (p.296).

* * *

           In his conclusion, “Aftermath: What Legacies, What Lessons,” Hayes returns to those of us still grappling to comprehend the Holocaust. What transpired during the fateful years 1933-45 was “not mysterious and inscrutable,” he writes.  It was “the work of humans acting on familiar human weaknesses and motives: wounded pride, fear, self-righteousness, prejudice, and personal ambition being among the most obvious” (p.342) — qualities hardly in short supply in today’s public sphere.  Among my own takeaway lessons from reading Hayes’ thoughtful work: the word “unfathomable” is unlikely to be used again in these pages to describe the Holocaust.

Thomas H. Peebles

La Châtaigneraie, France

August 14, 2018



Filed under Uncategorized

Inside the Mind and Time of Victor Hugo


David Bellos, The Novel of the Century:

The Extraordinary Adventure of Les Misérables 

            When first published on April 4, 1862, Victor Hugo’s novel Les Misérables was an immediate best seller – in today’s parlance, a “blockbuster” but also, at 1,900 pages in the original French, a “doorstopper” (the English translation was a mere 1,500 pages).  Hugo in 1862 was among France’s most revered writers, but was then living in exile on the Channel Island of Guernsey, having fled several years earlier from what he considered the dictatorial regime of Louis-Napoléon, better known as Napoléon III.  Hugo intended Les Misérables, his epic tale of reconciliation and redemption, with its searing portraits of the poor and those at society’s margins, to be the culmination of his already illustrious career as a novelist, poet and playwright.  It didn’t take long after Les Misérables’ initial publication for Hugo to conclude that his novel would easily meet his lofty aspirations.

              Over a century and a half later, Hugo’s Les Misérables remains in the forefront of literary classics, still read in the original French and in countless translations in all the world’s major languages.  Within weeks of its publication, moreover, Les Misérables was turned into a play, and in the 20th century became the subject of more adaptations for radio, stage and screen than just about any other literary work.  But David Bellos, professor of French and comparative literature at Princeton University, worries that Les Misérables’ extraordinary staying power and its enduring mass-market appeal have led too many to dismiss the novel as a work that falls below the level of great art.

             In The Novel of the Century: The Extraordinary Adventure of Les Misérables, Bellos seeks to dispel such notions by getting inside Victor Hugo’s mind and his time as he pieced together Les Misérables.   Much like Alice Kaplan’s Looking for “The Stranger”: Albert Camus and the Life of a Literary Classic, reviewed here in April, Bellos’ work could be considered a “biography of a book.”  In an introductory chapter, “The Journey of Les Misérables,” Bellos provides an overview of the novel, its setting and its multiple twists and improbable turns, all highly useful for readers who have not read the novel for several years, if at all.

       Here he introduces the novel’s principal characters: Jean Valjean, famously sentenced to hard labor for stealing a loaf of bread, whose twenty-year quest to rehabilitate himself constitutes the novel’s “narrative backbone” (p.xviii); Fantine, an abandoned single mother who loses her job, falls into prostitution and meets an early death; her illegitimate daughter Cosette, entrusted to Valjean’s care after her mother’s death; Javert, the policeman who pursues Valjean relentlessly throughout the novel; the inn-keeping couple the Thénardiers, and their urchin children, Éponine and Gavroche; and Marius, a student and budding political activist who falls in love with Cosette.

               Les Misérables consists of five parts, with 48 “books” (Bellos too has divided his work into five parts, surely not coincidentally).  Hugo’s Part I is entitled “Fantine;” Part II, “Cosette,” in which the young girl is saved by Valjean from cruel foster parents after her mother’s death; Part III, “Marius,” focusing on the student’s life on the barricades in his fight to overcome the monarchy; Part IV, “The Idyll of Rue Plumet and the Epic of Rue Saint Denis,” two Parisian streets, the first where the love affair of Cosette and Marius blossomed, the second where Marius fought at a political barricade; and Part V, simply “Jean Valjean.”  Each of the 48 books has chapters, 365 in all.  With many of the chapters quite short, Bellos suggests a chapter per day over the course of a year for those who want to read or reread the novel.

                The individuals who surrounded Hugo as he wrote Les Misérables loom as large in Bellos’ work as the characters in the novel itself.   Hugo and his wife Adèle Foucher had five children, the first of whom died in infancy.  Their oldest daughter Léopoldine died in a boating accident at age 19, the “gravest emotional wound in Hugo’s life” (p.98). Their last child, daughter Adèle, kept a diary from an early age that provides a major portion of the record about the evolution of Les Misérables.  Adèle was in the forefront of an innovative campaign to market the novel across Europe (her unrequited love for a British military officer was the subject of the 1975 François Truffaut film, The Story of Adèle H.).  Hugo’s older son Charles also played a major role in arranging for publication of Les Misérables, while younger son François-Victor became a literary heavyweight in his own right through his translations into French of the major works of Shakespeare.

            An additional presence throughout Bellos’ account is Hugo’s long-term mistress, Juliette Drouet, an aspiring actress who followed Hugo into exile.  While living in quarters separate from the Hugo family, Juliette became Hugo’s regular traveling companion and served informally as his secretary and confidante (Juliette was traveling with Hugo when he learned of daughter Léopoldine’s death).  But Bellos adds that Hugo was a “serial philanderer” (p.30), with ample supplements to his on-going extra-marital liaison with Juliette and his legal attachment to wife Adèle.

           Les Misérables begins in 1815 and extends to 1835.  Hugo wrote the novel in fits and starts between 1845 and 1862.  The period between 1815 and 1862 encompasses some of the most dramatic upheavals of France’s turbulent and often violent 19th century.  The final defeat of Napoléon Bonaparte at the Battle of Waterloo and the “Bourbon restoration” of Louis XVIII as a constitutional monarch took place in the fateful year 1815.  By 1862, France was in the midst of the “Second Empire” of Louis-Napoléon (Napoléon III), the nephew of Napoléon Bonaparte, who in a coup d’état in 1851 had ended France’s Second Republic, the event that sent Hugo into exile.  In addition to the 1851 coup, the first half of the century witnessed periodic uprisings against the government, among them: the 1830 “July Revolution,” which ousted Louis XVIII’s successor Charles X in favor of Louis-Philippe d’Orleans; a brief 1832 rebellion which unsuccessfully sought to reverse the 1830 July Revolution, an uprising critical to Hugo’s novel but less so to French history; and the February 1848 revolution, which deposed Louis-Philippe and established the Second French Republic (of which Louis-Napoléon was elected president later that year), an uprising in which Hugo was directly involved.

            Bellos’ account shines in its illumination of how these events and the broader currents of 19th century French history affected both Hugo himself and the novel he was working on.  To enhance our understanding of the novel and its seventeen year gestation period, Bellos includes what he terms “interludes,” short digressions on diverse but pragmatic subjects, such as regional and class differences in language in Hugo’s time; money and credit in 19th century France; intellectual property protection and the technical process involved in publishing books in the mid-19th century; and transportation in the time of Les Misérables (people walked a lot more then than they do today).  Bellos also delves into how Hugo’s political and religious views entered into his novel.

             Although Les Misérables is a “progressive” work which “surely expresses moral outrage at the plight of the poor” (p.219), Bellos cautions that it should not be considered a tract for the emerging views of the European left.  Subtly, however, the novel traced out a “limited if still ambitious program of social action” (p.202-03): more humane criminal justice, with easier entry back into society for offenders, more education, and more jobs for the uneducated.  Hugo, who had never been baptized and did not subscribe to any established religion or cult, considered Les Misérables to be a religious but not Catholic work.  Hugo’s novel argues for a “natural religion” capable of bridging the conflicts between Catholics and non-Catholics, and between believers and non-believers, conflicts which in Hugo’s view exacerbated the disparities between rich and poor.  Les Misérables is thus, as Bellos puts it, a “work of reconciliation — between the classes, but also between the conflicting currents that turn our own lives into storms. It is not a reassuring tale of the triumph of good over evil, but a demonstration of how hard it is to be good” (p.xxiv).

* * *

             Bellos notes that Les Misérables was already an “historical” novel when it first appeared in 1862.  With its story set in a past that had ended over a quarter of a century earlier, the novel could immediately be read as an “exercise in nostalgia for a vanished world . . . [and as] an unintended guide to the way things used to be” (p.54).  To dig into the novel’s 1815-to-1835 period is thus to dig into Hugo’s own adolescence and his formative early adult years.  The son of a soldier who fought in Napoléon Bonaparte’s wars, Hugo turned 13 in 1815.

               A precocious literary youth, Hugo had already demonstrated a flair for poetry by 1815.  By 1832, the year he turned 30, he was among France’s best-known poets and had published a handful of novels, among them the immensely popular Notre Dame de Paris (known in English as The Hunchback of Notre Dame).  That year marked the death of Germany’s Johann Wolfgang von Goethe, the “undisputed eminence of European literature for the preceding half-century.”  Bellos notes that Hugo considered himself the logical candidate to step into Goethe’s shoes as “European genius-in-chief” (p.4).  It was also the year of the unsuccessful two-day revolt against the July Monarchy, a minor episode in France’s 19th century which Hugo elevated to the center of Les Misérables through Marius’ participation in its events.

               The first draft of Hugo’s novel, whose title was initially Les Misères, was written in Paris between November 1845 and February 1848.  Although this draft no longer exists, scholars have concluded that its plot corresponds closely to that of the final version.  The year 1845 was also when Hugo was appointed a peer in France’s upper legislative chamber.  He was working on Marius’ involvement in the 1832 upheaval at the time of the 1848 uprising against the regime of King Louis-Philippe, and found himself, improbably, on the front lines defending the regime – an “experience like no other Hugo had ever had, and not easy to square with his views, his feelings, and his position” (p.47-48).  Hugo’s role in the suppression of the popular revolt of 1848 was, Bellos argues, “what he had to come to terms with to carry on with his book, and what he [had] to come to terms with in his book if it [was] to be the ‘social and moral panorama’ that he intended it to be” (p.113-14).

              Hugo’s position as an establishment figure ended definitively when he became one of the most outspoken and relentless critics of Napoléon III’s 1851 coup d’état.  Forced into exile, he fled initially to Brussels and from there to the Channel Islands, outposts of the British crown off the coast of France.  After living first on the Channel Island of Jersey, Hugo and his entourage landed in Guernsey in 1855, with his draft novel gathering dust in a trunk.  He established residence for his family at an elaborate mansion known as Hauteville House, overlooking the sea.  Juliette was assigned to a smaller house nearby.  In 1859, Napoléon III issued an amnesty to those who had opposed his seizure of power in 1851.  Many of the exiles on the Channel Islands chose to return to France, but Hugo elected to stay.  It was not until April 25, 1860, however, that Hugo went back to the trunk that had followed him from Jersey and pulled out the musty pages of the work he had spent little time on since 1848.

            From that date onward, Bellos’ narrative gathers momentum as he traces the frenetic period that followed.   By this time, Hugo had changed the name of his work from Les Misères to Les Misérables, his innovative term that shifted the meaning from the “poor,” “pitiable” or “despicable” to something more inclusive that suggests solidarity among the less fortunate: a “moral and social identity that had no name before” (p.103).  Hugo finally settled on the names of most of his characters in early 1861. These names have become so familiar, Bellos observes, that it “takes an effort to realize that they all had to be invented, for none of them was taken from the existing stock of French first and family names” (p.115).  Hugo did not finalize Jean Valjean’s name until March 1861.  Previously he had been Jean Tréjan, Jacques Sou and Jean Vlajean.

            Hugo’s work was technically covered by the same contract that had paid him in the 1830s for Notre Dame de Paris.  Because of concerns that the novel might be subject to censorship or litigation if published in France, Hugo shifted to Albert Lacroix and his Brussels-based, politically liberal micro-publishing firm.  Hugo needed a buyout of his original contract and, above all, wanted more for Les Misérables than had ever been paid to an author for any book.  He largely got it: nearly 3 million British pounds in today’s currency, with about 40% of that amount paid to him up front, in cash, prior to publication.  Hugo’s deal with Lacroix, worked out in a single day when Lacroix visited Hugo at Hauteville House without having read the draft of the novel, was thus the “contract of the century,” to use the title of one of Bellos’ chapters.

            Hugo got his cash payment on time, in December 1860, because Oppenheim Bank of Brussels agreed to lend money to Lacroix to pay for the book.  For Hugo, debt and crime were two sides of the same coin, and Bellos notes the irony of a novel “so firmly opposed to debt being launched on the back of a major loan – probably the first loan ever made by a merchant bank to finance a book,” thereby placing Les Misérables “at the vanguard of . . . the use of venture capital to fund the arts” (p.143).

          Hugo was still working on the latter portions of the novel when Parts I and II appeared in print on April 4, 1862.  More than two months later, on June 14, 1862, Hugo “corrected the last galley of the last volume of Les Misérables and dispatched it to Brussels.”  Over the course of the previous nine months, he had “turned a single-copy manuscript of a still unfinished work into the greatest publishing sensation of his age” (p.260).

             While Hugo was confined to Hauteville House finalizing his novel, daughter Adèle was in Paris serving as the publicity manager for its launch, working with her brother Charles and Lacroix, both in Brussels.  Adèle had to raise interest and enthusiasm for the novel to a “pitch so high it would discourage the authorities from banning or seizing the book.  But she also had to let not a scrap of it be seen in advance. The requirement to boost the book while keeping it secret made the publicity manager’s job a work of art” (p.223).  Adèle promoted the book through a billboard campaign.  She also gave advance portions to newspapers, but told them they could not print them until she gave the go-ahead.  Thanks to this advance work, the book had been “trumped in all the media then available” in France, a “country that the author refused to enter” (p.228).

            Les Misérables was to go on sale in other major European cities outside France at the same time.  Adèle, Charles and Lacroix thus devised what Bellos labels the “first truly international book launch,” but with an infrastructure that was “barely ready for it: paddle steamers, a rail network that still had more gaps than connections, four-horse diligences and maybe, on the approaches to St. Petersburg, a jingling three-horse sleigh” (p.228).

             From its initial appearance, there was an electricity attached to Hugo’s novel that is difficult to fathom more than a century and a half later.  The first two parts of Les Misérables sold out in France in two days.  The crush for the first copies “verged on a riot” (p.231).  Groups of workers pooled their limited means to buy a copy of the book, passed it around among members of the group, and took turns reading its nearly 2,000 pages to fellow workers who were unable to read.  But the French press did not share readers’ enthusiasm for Les Misérables.  Left-wing and socialist reviews were lukewarm; those in the right-wing press were stinging.

          Outside France, one recurring criticism of the novel was that it was too rooted in French history, and thus lacked deep meaning for non-French reading audiences.  These criticisms were not unfounded, Bellos points out.  Underlying Les Misérables was Hugo’s view that France was the “moral and intellectual powerhouse of the world,” with the novel serving as the “first full formulation of the conventional explanation of the exceptional status of France” (p.235).  One of the larger purposes of Les Misérables, which begins at the end of France’s revolutionary period, was to make the French Revolution the “well-spring of nineteenth-century civilization and so to heal the bleeding wound that it bequeathed to subsequent generations of French men and women” (p.38).

            When the publisher of the first Italian translation of Les Misérables fretted that the legacy of the French Revolution had little relevance to his readers, Hugo responded with a “grandiose reply,” in which he “pulled out all the rhetorical stops” (p.237).  Hugo said that while he did not know whether Les Misérables would be read by all, he had written it for everyone. “I write,” Hugo explained:

with a deep love for my country but without preoccupying myself with France more than any other nation. As I grow older I grow simpler and become increasingly a patriot of humanity.  That is the trend of our times and the law of radiation of the French Revolution. To respond to the growing enlargement of civilization, books must stop being exclusively French, Italian, German, Spanish or English, and become European; more than that, human (p.237).

* * *

          As if to respond himself to the Italian publisher and others in Hugo’s time who considered Les Misérables too Franco-centric, Bellos concludes that the novel’s “moral compass” extends “far beyond the history, geography, politics and economics of the world in which its story is set. The novel achieves the extraordinary feat of being at the same time an intricately realistic portrait of a specific place and time, a dramatic page-turner with masterful moments of theatrical suspense and surprise, an encyclopedia of facts and ideas and an easily understood demonstration of generous moral principles that we could do far worse than to apply to our lives today” (p.259).  Bellos’ conclusion could also be considered a final riposte to those modern-day skeptics who doubt whether Les Misérables rises to the level of great art.  Few readers of Bellos’ erudite yet easy-to-read account are likely to side with the skeptics.

Thomas H. Peebles

La Châtaigneraie, France

July 17, 2018
