
Medieval Scholar On the Front Lines of Modern History

 

Robert Lerner, Ernst Kantorowicz:

A Life (Princeton University Press)

          Potential readers are likely to ask themselves whether they should invest their time in a biography of a medieval historian, especially one they have probably never heard of.  Ernst Kantorowicz (1895-1963) repays that investment because he was more than just one of the 20th century’s most eminent historians of medieval Europe, a scholar who changed the way we look at the Middle Ages, although for many readers that alone would be reason enough.  But Kantorowicz’s life story is only in part that of an academic.  It also encompasses some of the 20th century’s most consequential moments.

             A German Jew, Kantorowicz fought in the Kaiser’s army in World War I, then took up arms on three separate occasions on behalf of Germany in the chaotic and often violent period immediately following the war.  After the Nazis took power, Kantorowicz became one of the fiercest academic critics of the regime.  Forced to flee Germany in 1938, Kantorowicz wound up in the United States, where he became, like Hannah Arendt, Albert Einstein and scores of others, a German Jewish émigré who enriched incalculably American cultural and intellectual life.  He landed at the University of California, Berkeley.  But just as he was settling comfortably into American academic life, Kantorowicz was fired from the Berkeley faculty when he refused to sign a McCarthy-era, Cold War loyalty oath – although not before distinguishing himself as the faculty’s most vocal and perhaps most eloquent opponent of the notion of loyalty oaths. 

          In Ernst Kantorowicz: A Life, Robert Lerner, himself a prominent medieval historian and professor emeritus at Northwestern University, painstakingly revisits these turbulent 20th century moments that Kantorowicz experienced first hand.  He adds to them his analyses of Kantorowicz’s scholarly output and creative thinking about medieval Europe, by which Kantorowicz earned his reputation as one of the “most noted humanistic scholars of the twentieth century” (p.387).  Lerner also demonstrates how Kantorowicz transformed from a fervently conservative German nationalist in the World War I era into an ardently liberal anti-nationalist in the post-World War II era.  And he adds to this mix Kantorowicz’s oversized personality and unconventional personal life: urbane, witty, and sometimes nasty, Kantorowicz was a “natty dresser, a noted wine connoisseur, and a flamboyant cook” (p.4) who was also bisexual, alternating between men and women in his romantic affairs.  Lerner skillfully blends these elements together in this comprehensive biography, arranged in strict chronological form.

          Although Kantorowicz’s life’s journey encompassed far more than his time and output as an academic, he was a student or teacher at some of the world’s most prestigious academic institutions: Heidelberg in the 1920s, Oxford in the 1930s, the University of California, Berkeley, in the 1940s, and the Institute for Advanced Study, in Princeton, New Jersey, in the 1950s.  His stints in Heidelberg and Oxford produced the two major influences on Kantorowicz’s intellectual life: Stefan George and Maurice Bowra.  In Heidelberg, Kantorowicz fell under the spell of George, a mesmerizing poet and homoerotic cult-like leader who espoused anti-rationalism, anti-modernism and hero worship.  In the following decade at Oxford, he met Maurice Bowra, a distinguished classicist, literary critic, and part-time poet, known for his biting wit, notorious quips, and “open worship of pleasure” (p.176).  George and Bowra are easily the book’s two most memorable supporting characters.

          Kantorowicz’s life, like that of almost all German Jews of his generation lucky enough to survive the Hitler regime, breaks down into three broad phases: before, during and after that regime.  In Kantorowicz’s case, the first may be the most captivating of the three phases.

* * *

          Ernst Kantorowicz was born in 1895 in Posen, today Poznań and part of Poland but then part of Prussian Germany.  The son of a prosperous German-Jewish liquor manufacturer, Kantorowicz volunteered to fight for the Kaiser in World War I.  Wounded at Verdun, the war’s longest and costliest battle, Kantorowicz was awarded an Iron Cross for his valiant service on the Western Front.  In early 1917, Kantorowicz was dispatched to the Russian front, and thereafter to Constantinople.  In Turkey, he was awarded the Iron Crescent, the Turkish equivalent of the Iron Cross.  But his service in Turkey came to an abrupt end when he had an affair with a woman who was the mistress of a German general.

          In the immediate post-war era, Kantorowicz fought against a Polish revolt in his native city of Posen; against the famous Spartacist uprising in Berlin in January 1919 (the uprising’s 100th anniversary last month seems to have passed largely unnoticed); and later that year against the so-called Bavarian Soviet Republic in Munich.  In September 1919, Kantorowicz matriculated at the University of Heidelberg, ostensibly to study economics, a sign that he intended to take up his family business from his father, who had died earlier that year.  But while at Heidelberg Kantorowicz also developed interests in Arabic, Islamic Studies, history and geography.  In 1921, he was awarded a doctorate based on a slim dissertation on guild associations in the Muslim world, a work that Lerner spends several pages criticizing (“All told it was a piece of juvenilia . . .  [C]oncern for proof by evidence and the weighing of sources were absent.  Nuance was not even a goal;” p.65). 

          Kantorowicz in these years was plainly caught up in the impassioned nationalist sentiments that survived and intensified in the wake of Germany’s defeat in the war and the humiliating terms imposed upon it by the Treaty of Versailles.  In 1922, he wrote that German policy should be dedicated to the destruction of France.  His nationalist sentiments were heightened in Heidelberg when he came under the spell of the poet-prophet Stefan George, one of the dominant cultural figures in early 20th century Germany.

          George was a riveting, charismatic cult figure who groomed a coterie of carefully selected young men, all “handsome and clever” (p.3).  Those in his circle (the George-Kreis in German) were “expected to address him in the third person, hang on his every word, and propagate his ideals by their writings and example” (p.3).  George read his “lush” and “esoteric” poetry as if at a séance (p.69).  Since George took beauty to be the expression of spiritual excellence, he often asked young men to stand naked before the others, as if models for a sculptor. 

          George was “firmly antidemocratic” and rhapsodized over an idealized leader who would “lead ‘heroes’ under his banner” (p.80).  By means of George’s teaching and influence, the young men of the George-Kreis were expected to “partake of his wisdom and become vehicles for the arduous but inevitable triumph of a wonderfully transformed Germany,” (p.72), a land of “truth and purity” (p.3).  George urged Kantorowicz to write a “heroic” biography of 13th century Holy Roman emperor Frederick II (1194-1250), at various times King of Sicily, Germany, Jerusalem and the Holy Roman Empire.  George considered Frederick II the embodiment of the leadership qualities that post-World War I Germany sorely lacked.

          Kantorowicz’s esoteric and unconventional biography came out in 1927, the first full-scale work on Frederick II to be published in German.  Although written for a popular audience, the massive work (632 pages) filled a void that German scholars immediately recognized.  Out of nowhere, Lerner writes, along came the 31-year-old Kantorowicz, who had “never taken a university course in medieval history” (p.107), offering copious detail about Frederick II’s reign.  Although the book lacked documentation, it was obviously based on extensive research.  The book proved attractive for its style as much as its substance.  Kantorowicz demonstrated that he was a “forceful writer, taken to employing high-flown rhetoric, alliteration, and sometimes archaic diction for dramatic effect” (p.101).  Moreover, he utilized unconventional sources, such as legends, prophecies, manifestoes, panegyrics, and ceremonial chants.

           But Kantorowicz’s work was controversial.  Because it was published without footnotes, some charged that he was making up his story, a charge he later rebutted with copious notes.  Others found the biography too enthusiastic, and insufficiently dispassionate and objective.  To many, it seemed to celebrate authoritarianism and glorify German nationalism.  Kantorowicz portrayed Frederick as a tragic hero and the idealized personification of a medieval German nation.  Although Kantorowicz was not religious, Lerner finds that he came close to implying that the hand of God was at work in Frederick’s achievements.  Early versions of the book carried a swastika on the cover, and the Nazis seemed to like it, even though it was written by a Jew.  Their affinity for the book may have been one reason Kantorowicz later sought to put distance between himself and the work that established his scholarly reputation.

          In 1924, while preparing the biography, Kantorowicz traveled to the Italian portions of Frederick’s realm, where he was deeply impressed with the remains of the ancient Greeks.  The journey converted him into a Hellenophile, a lover of ancient Greek civilization.  From that point forward, even though Kantorowicz’s publications and his academic life continued to center on the Middle Ages, his emotional commitment lay with the ancients, another indication of George’s influence. 

          In 1930, Kantorowicz’s work on Frederick II earned him a teaching position at the University of Frankfurt, only 50 miles from Heidelberg but an altogether different sort of institution.  Prosperous merchants, including many Jews, had founded the university only in 1914, and it was among the most open of German universities to Jewish scholars.   In the winter of 1932, Kantorowicz acceded to a full professorial position at Frankfurt.  But his life was upended one year later when the Nazis ascended to power, beginning the second of his life’s three phases.

* * *

          Ever an elitist, Kantorowicz looked down upon the Nazis as “rabble” (p.159), although there is some indication that he initially approved of the Nazis’ national-oriented views, or at least found them substantially coterminous with his own.  But by the end of 1933, his situation as a Jewish professor had become “too precarious for him to continue holding his chair” (p.158), and he was forced to resign from the Frankfurt faculty.  Because he could no longer teach, he found plenty of time for research, comparing himself to Petrarch as a “learned hermit” (p.185).

            After resigning from the faculty at Frankfurt, Kantorowicz gained a six-month, non-paying fellowship at Oxford in 1934.  The fellowship transformed Kantorowicz into a life-long anglophile and enabled him to improve his English, a skill that would be vital to his survival when he had to flee Germany a few years later.  Almost everyone Kantorowicz met at Oxford was on the political left, and the German nationalist began unmistakably to move in this direction during his Oxford sojourn.  Renowned French medievalist Marc Bloch was at Oxford at the same time.  The two hit it off well, another  indication that Kantorowicz’s nationalist and anti-French strains were mellowing. 

            But the most lasting relationship arising out of Kantorowicz’s fellowship at Oxford was with Maurice Bowra, as eccentric in his own way as George.  An expert on ancient Greek poetry, Bowra was famous for his spontaneous, off-color aphorisms.  Isaiah Berlin termed Bowra the “greatest English wit of his day” (p.176). Bowra was as openly gay as one could be in 1930s England, and had an affair with Kantorowicz during the latter’s time at Oxford.  Although their romance cooled thereafter, the two remained in contact for the remainder of Kantorowicz’s life.  Lerner sees Bowra replacing George as the major intellectual influence upon Kantorowicz after his stint at Oxford.   

            Back in Germany by mid-1934, Kantorowicz received the status of “professor emeritus,” which provided regular payments of a pension at full salary “as if he had retired at the end of a normal career” (p.186).  That Kantorowicz remained in Germany in these years demonstrated to some that he was a Nazi sympathizer, a view that Lerner vigorously rejects.  “No German professor other than Ernst Kantorowicz spoke publicly in opposition to Nazi ideology throughout the duration of the Third Reich” (p.171), Lerner insists.  But Kantorowicz barely escaped arrest in the wake of the violent November 1938 anti-Semitic outburst known as Kristallnacht.  Within weeks, he had fled his native country — thereby moving into the third and final phase of his life’s journey.

* * *

            After a brief stop in England, Kantorowicz found himself in the fall of 1939 at the University of California, Berkeley, where he gained a one-year teaching appointment.  Until he was awarded a full professorship in 1945, he faced unemployment each year, rescued at the last minute by additional one-year appointments.  The four years from June 1945 until June 1949, Lerner writes, were “probably the happiest in Ernst Kantorowicz’s life.”  He considered himself to be in a “land of lotus-eaters . . . Conviviality was unending, as was scholarly work” (p.294).  He was smitten by the pretty girls in his classes, and had a prolonged affair with a cousin who lived with her husband in Stockton, some 50 miles away, but who had a car.  By this time the fervent German nationalist had become, just as fervently, an anti-nationalist well to the left of the political center.  He worried that the hyper-nationalism of the Cold War was leading inevitably to nuclear war, and he identified strongly with the struggle for justice for African-Americans.

            Substantively, Lerner characterizes Kantorowicz’s scholarly work in his Berkeley years as nothing short of amazing.  He began to consider Hellenistic, Roman and Early Christian civilizations collectively, finding in them a “composite coherence” (p.261), perhaps a predictable outgrowth of his affinity for the ancient civilizations.  Kantorowicz’s perspective foreshadowed the late 20th century tendency to treat these civilizations together as a single “world of late antiquity.”  He was also beginning to focus on the emergence of nation states in Western Europe.  In part because of uncertainty with the English language, Kantorowicz wrote out all his lectures, and they are still available.  Browsing through them today, Lerner writes, “one can see that they not only were dazzling in their insights, juxtapositions, and sometimes even new knowledge but also were works of art, structurally and rhetorically” (p.273). 

            If the years 1945 to 1949 were the happiest of Kantorowicz’s life, the period from July 1949 through August 1950, one of the hottest periods in the Cold War, was almost as trying as his time in Germany under the Nazi regime.  University of California President Robert Sproul imposed an enhanced version of a California state loyalty oath on the university’s academic employees, with the following poison pill: “I do not believe in, and I am not a member of, nor do I support any party or organization that believes in, advocates, or teaches the overthrow of the United States Government by force or by any illegal or unconstitutional means” (p.313).  The oath affected tenured as well as non-tenured instructors — it was no oath, no job, even for the most senior faculty members.

           Kantorowicz refused to sign the oath. One Berkeley faculty member recalled years later that Kantorowicz had been “undoubtedly the most militant of the non-signers” (p.317).  Invoking his experience as an academic in Hitler’s Germany, Kantorowicz argued that even if the oath appeared mild, such coerced signing was always the first step toward something stronger.  He termed the requirement a “shameful and undignified action,” an “affront and a violation of both human sovereignty and professional dignity,” requiring a faculty member to give up “his tenure . . . his freedom of judgment, his human dignity and his responsible sovereignty as a scholar” (p.314). Professional fitness to teach or engage in research, Kantorowicz argued, should be determined by an “objective evaluation of the quality of the individual’s mind, character, and loyalty, and not by his political or religious beliefs or lawful associations”  (p.326).

              In August 1950, Kantorowicz and one other survivor of Nazi Germany were among several Berkeley faculty members officially expelled from the University.  Their dismissals were reversed by a state court of appeals in 1952, but on the technical ground that the university couldn’t carve out separate oaths for faculty members.  The California Supreme Court affirmed the decision in October 1952, entitling Kantorowicz to reinstatement and severance pay.  But by that time he had left Berkeley for the prestigious Institute for Advanced Study in Princeton, New Jersey (technically separate from Princeton University).

          The Princeton phase of Kantorowicz’s life seems drab and anticlimactic by comparison.  But in 1957, while at Princeton, Kantorowicz produced The King’s Two Bodies, his most significant work since his biography of Frederick II more than a quarter of a century earlier.  Using an “astonishing diversity of sources” (p.355), especially legal sources, Kantorowicz melded medieval theology with constitutional and legal history, political theory, and medieval ideas of kingship to generate a new vision of the Middle Ages.

          Kantorowicz’s notion of the king having two bodies derived from a Tudor legal fiction that the king’s “body politic” is, in effect, immortal.  In The King’s Two Bodies, Kantorowicz found a link between the concept of undying corporations in English law and the notion of two bodies for the king.  Because England was endowed with a unique parliamentary system, Kantorowicz maintained that it was “only there that the fiction of the king never dying in the capacity of his ‘body politic’ was able to take shape” (p.351).  With new angles on legal history, political theory, and ideas of kingship, The King’s Two Bodies constitutes one of Kantorowicz’s “great historiographical triumphs” (p.355), as Lerner puts it.  Appreciation for Kantorowicz’s last major — and most lasting — contribution to medieval scholarship continued to increase in the years after its initial publication.

            Kantorowicz’s articles after The King’s Two Bodies revolved in different ways around the “close relationship between the divinity and the ruler, and about the vicissitudes of that relationship” (p.363).  In late 1962, he was diagnosed with an aortic aneurysm, yet  went about his affairs as if nothing had changed.  He “carried on earnestly with his dining and imbibing.  As usual he drank enough wine and spirits to wash an elephant” (p.376).  He died in Princeton of a ruptured aneurysm in September 1963 at age 68.

* * *

            Some readers may find that Lerner dwells excessively on academic politics – a dissection of the letters of recommendation on behalf of Kantorowicz’s candidacy for a position at Berkeley spans several pages, for example.  In addition, the paperback version is set in small type, making it an eye-straining experience and giving the impression that the subject matter is denser than it really is.  But undeterred readers, willing to plough through the book’s nearly 400 pages, should be gratified by its insights into a formidable scholar of medieval times as he lived through some of the most consequential moments of modern times.  As Lerner aptly concludes, given Kantorowicz’s remarkable life, a biography “could not be helped” (p.388).

Thomas H. Peebles

La Châtaigneraie, France

February 13, 2019




They Kept Us Out of War . . . Until They Didn’t

Michael Kazin, War Against War:

The American Fight for Peace, 1914-18 

            Earlier this month, Europe and much of the rest of the world paused briefly to observe the 100th anniversary of the day in 1918 when World War I, still sometimes called the Great War, officially ended.  In the United States, where we observe Veterans’ Day without explicit reference to World War I, this past November 11th constituted one of the rare occasions when the American public focused on the four-year conflict that took somewhere between 9 and 15 million lives, including approximately 116,000 Americans, and indelibly shaped the course of 20th century history.  In War Against War: The American Fight for Peace, 1914-18, Michael Kazin offers a contrarian perspective on American participation in the conflict.  Kazin, professor of history at Georgetown University and editor of the avowedly leftist periodical Dissent, recounts the history of the diverse groups and individuals in the United States who sought to keep their country out of the conflict when it broke out in 1914, and how those groups changed, evolved and reacted once the United States, under President Woodrow Wilson, went to war in April 1917.

            The opposition to World War I was, Kazin writes, the “largest, most diverse, and most sophisticated peace coalition to that point in U.S. history” (p.xi).  It included pacifists, socialists, trade unionists, urban progressives, rural populists, segregationists, and crusaders for African-American rights.  Women, battling at the same time for the right to vote, were among the movement’s strongest driving forces, and the movement enjoyed support from both Democrats and Republicans.  Although the anti-war opposition had a decidedly anti-capitalist strain — many in the opposition saw the war as little more than an opportunity for large corporations to enrich themselves — a handful of well-known captains of American industry and finance supported the opposition, among them Andrew Carnegie, Solomon Guggenheim and Henry Ford.  It was a diverse and colorful collection of individuals, acting upon what Kazin describes as a “profoundly conservative” (p.xviii) impulse to oppose the buildup of America’s military-industrial complex and the concomitant rise of the surveillance state.  Not until the Vietnam War did any war opposition movement approach the World War I peace coalition in size or influence.

            This eclectically diverse movement was in no sense isolationist, Kazin emphasizes.  That pejorative term had not yet come into popular usage.  Convinced that the United States had an important role to play on the world stage beyond its own borders, the anti-war coalition sought to create a “new global order based on cooperative relationships between nation states and their gradual disarmament” (p.xiv).  Its members hoped the United States would exert moral authority over the belligerents by staying above the fray and negotiating a peaceful end to the conflict.

             Kazin tells his story in large measure through admiring portraits of four key members of the anti-war coalition, each representing one of its major components: Morris Hillquit, a New York labor lawyer and a Jewish immigrant from Latvia, standard-bearer for the Socialist Party of America and left-wing trade unions; Crystal Eastman, a charismatic and eloquent New York feminist and labor activist, on behalf of women; and two legislative representatives, Congressman Claude Kitchin, a populist Democrat from North Carolina and an ardent segregationist, and Wisconsin Republican Senator Robert (“Fighting Bob”) LaFollette, Congress’ most visible progressive.  The four disagreed on much, but they agreed that industrial corporations wielded too much power, and that the leaders of American industry and finance were “eager to use war and preparations for war to enhance their profits” (p.xiv).  Other well-known members of the coalition featured in Kazin’s story include Jane Addams, renowned social activist and feminist; William Jennings Bryan, Secretary of State under President Wilson, three-time presidential candidate, and Christian fundamentalist; and Eugene Debs and Norman Thomas, successively perennial presidential candidates of the Socialist Party of America.

            Kazin spends less time on the coalition’s opponents – those who had few qualms about entering the European conflict and, short of that, supported “preparedness” (always used with quotation marks): the notion that the United States needed to build up its land and naval capabilities and increase the size of its military personnel in the event that they might be needed for the conflict.  But those favoring intervention and “preparedness” found their voice in the outsized personality of former president Theodore Roosevelt, who mixed bellicose rhetoric with unadulterated animosity toward President Wilson, the man who had defeated him in a three-way race for the presidency in 1912.  After the United States declared war in April 1917, the former Rough Rider, then fifty-eight years old, sought to assemble his own volunteer unit and depart for the trenches of Europe as soon as the unit could be organized and trained.  To avoid this result, President Wilson was able to steer the Selective Service Act through Congress, establishing the national draft that Roosevelt had long favored – and Wilson had previously opposed.

             Kazin’s story necessarily turns around Wilson and his fraught relationship with the anti-war coalition. Stern, rigid, and frequently bewildering, Wilson was a firm opponent of United States involvement in the war when it broke out in 1914.  In the initial months of the conflict, Wilson gave the anti-war activists reason to think they had a sympathetic ear in the White House.  Wilson wanted the United States to stay neutral in the conflict so he could negotiate a lasting and just peace — an objective that the anti-war coalition fully endorsed.  He met frequently with peace groups and took care to praise their motives.  But throughout 1915, Wilson edged ever closer to the “preparedness” side. He left many on both sides confused about his intentions, probably deliberately so.  In Kazin’s interpretation, Wilson ultimately decided that he could be a more effective negotiator for a lasting and just peace if the United States entered the war rather than remained neutral. As the United States transitioned to belligerent, Wilson transformed from sympathizer with the anti-war coalition to its suppressor-in-chief. His transformation constitutes the most dramatic thread in Kazin’s story.

* * *

              The issue of shipping on the high seas precipitated the crisis with Germany that led Wilson to call for the United States’ entry into the war.  From the war’s outset, Britain had used its Royal Navy to prevent vessels from entering German ports, a clear violation of international law (prompting the quip that Britannia both “rules the waves and waives the rules” (p.25)).  Germany, with a far smaller naval force, retaliated by using its submarines to sink merchant ships headed for enemy ports.  The German sinking of the Cunard ocean liner RMS Lusitania off the coast of Ireland on May 7, 1915, killing nearly 1,200 people, among them 128 Americans, constituted the beginning of the end for any real chance that the United States would remain neutral in the conflict.

            A discernible pro-intervention movement emerged in the aftermath of the sinking of the Lusitania, Kazin explains.  The move for “preparedness” was no longer just the cry of the furiously partisan or a small group of noisy hawks like Roosevelt.  A wide-ranging group suddenly supported intervention in Europe or, at a minimum, an army and navy equal to any of the belligerents.  Peace activists who had been urging their neutral government to mediate a settlement in the war “now faced a struggle to keep their nation from joining the fray” (p.62).

            After the sinking of the Lusitania, throughout 1916 and into the early months of 1917, “social workers and feminists, left-wing unionists and Socialists, pacifists and non- pacifists, and a vocal contingent of senators and congressmen from both major parties,” led by LaFollette and Kitchin, “worked together to stall or reverse the drive for a larger and more aggressive military” (p.63), Kazin writes.  The coalition benefited from the “eloquent assistance” of William Jennings Bryan, who had recently resigned as Secretary of State over Wilson’s refusal to criticize Britain’s embargo as well as Germany’s attacks on neutral vessels.

            In the aftermath of the sinking of the Lusitania, Wilson grappled with the issue of “how to maintain neutrality while allowing U.S. citizens to sail across the perilous Atlantic on British ships” (p.103).  Unlike the peace activists, Wilson “tempered his internationalist convictions with a desire to advance the nation’s power and status . . . As the crisis with Germany intensified, the idealism of the head of state inevitably clashed with that of citizens whose desire that America be right always mattered far more than any wish that it be mighty” (p.149).

            As events seemed to propel the United States closer to war in late 1916 and early 1917, the anti-war activists found themselves increasingly on the defensive.  They began to concentrate most of their energies on a single tactic: the demand for a popular referendum on whether the United States should go to war.  Although the idea gathered genuine momentum among the public, it found scant support in Congress.  The activists never came up with a plausible argument why Congress should voluntarily give up or weaken its constitutional authority to declare war.

         In his campaign for re-election in 1916 against the Republican Party nominee, former Supreme Court Justice Charles Evans Hughes, Wilson ran as the “peace candidate,” dictated as much by necessity as desire.  “Few peace activists were ambivalent about the choice before them that fall,” Kazin writes.  “Whether as the lesser evil or a decent alternative, a second term seemed the only way to prevent Roosevelt . . . and [his] ilk from grabbing the reins of foreign policy” (p.124).  By September 1916, when Wilson left the White House for the campaign trail, he enjoyed the support of the “most left-wing, class-conscious coalition ever to unite behind a sitting president” (p.125).  Wilson eked out a narrow Electoral College victory in November over Hughes, with war opponents likely putting him over the top in three key states.

             Wilson’s re-election “liberated his mind and loosened his tongue” (p.141), as Kazin puts it.  In January 1917, he delivered to the United States Senate what came to be known as his “peace without victory” speech, in which he offered his vision for a “cooperative peace” that would “win the approval of mankind,” enforced by an international League of Peace.  Borrowing from the anti-war coalition’s playbook, Wilson foreshadowed the famous 14 points that would become his basis for a peace settlement at the post-war 1919 Versailles Conference: no territorial gains, self-government and national self-determination for individual states, freedom of commerce on the seas, and a national military force for each state limited in size so as not to become an “instrument of aggression or of selfish violence” (p.141).  Wilson told the Senators that he was merely offering an extension of the United States’ own Monroe Doctrine.  But although he didn’t yet use the expression, Wilson was proposing nothing less than to make the world safe for democracy.  As such, Kazin notes, he was demanding “an end to the empires that, among them, ruled close to half the people of the world” (p.141).

           Wilson’s “stunning act of oratory” (p.142) earned the full support of the anti-war activists at home and many of their counterparts in Europe.  Most Republicans, by contrast, dismissed Wilson’s ideas as an “exercise in utopian thinking” (p.143). But, two months later, in March 1917, German U-boats sank three unarmed American vessels. This was the point of no return for Wilson, Kazin argues.  The president, who had “staked the nation’s honor and prosperity on protecting the ‘freedom of the seas,’ now believed he had no choice but to go to war” (p.172).  By this time, Wilson had concluded that a belligerent America could “end the conflict more quickly and, perhaps, spur ordinary Germans to topple their leaders, emulating their revolutionary counterparts in Russia.  Democratic nations, old and new, could then agree to the just and ‘cooperative’ peace Wilson had called for back in January.  By helping to win the war, the United States would succeed where neutrality had failed” (p.172).

* * *

           As the United States declared war on Germany in April 1917 (it never declared war on Germany’s allies Austria-Hungary and Turkey), it also seemed to have declared war on the anti-war coalition and anyone else who questioned the United States’ role in the conflict.  The Wilson administration quickly turned much of the private sector into an appendage of the state, concentrating power to an unprecedented degree in the national government in Washington.  It persecuted and prosecuted opponents of the war effort with a ferocity few in the anti-war movement could have anticipated. “In no previous war had there been so much repression, legal and otherwise” (p.188), Kazin writes.  The Wilson administration, its allies in Congress and the judiciary all embraced the view that critics of the war had to “stay silent or suffer for their dissent” (p.189).  Wilson gave a speech in June 1917 in which he all but equated opposition with treason.

          The next day, Wilson signed into law the Espionage Act of 1917, designed to prohibit interference with military operations or recruitment as well as any support of the enemies of the United States during wartime.  The following year, Congress passed the even more draconian Sedition Act of 1918, which criminalized “disloyal, profane, scurrilous, or abusive language” about the government, the flag, or the “uniform of the armed forces” (p.246). The apparatus for repressing “disloyalty” had become “one tentacle of the newly potent Leviathan” (p.192).

             Kazin provides harrowing examples of the application of the Sedition Act.  A recent immigrant from Germany received a ten-year sentence for cursing Theodore Roosevelt and cheering a German victory on the battlefield.   Another served time for expressing his view that the conflict was a “rich man’s war and the United States is simply fighting for the money” (p.245); still another was prosecuted and jailed for charging that the United States Army was a “God damned legalized murder machine” (p.245).  Socialist Party and labor leader Eugene Debs received a ten-year sentence for telling party members – at a union picnic, no less – that their voices had not been heard in the decision to declare war.  The administration was unable to explain how repression of these relatively mild anti-war sentiments was helping to make the world safe for democracy.

            Many in the anti-war coalition, understandably, fell into line or fell silent, fearing that they would be punished for “refusing to change their minds” (p.xi). Most activists understood that, as long as the conflict continued, “resisting it would probably yield them more hardships than victories” (p.193).  Those continuing in the shrunken anti-war movement felt compelled to “defend themselves constantly against charges of disloyalty or outright treason” (p.243).  They fought to “reconcile their fear and disgust at the government’s repression with a hope that Wilson might still embrace a ‘peace without victory,’ even as masses of American troops made their way to France and into battle” (p.243).

           Representative Kitchin and Senator La Follette, the two men who had spearheaded opposition to the war in Congress, refrained from expressing doubts publicly about the war effort.  Kitchin, chairman at the time of the House of Representatives’ powerful Ways and Means Committee, nonetheless structured a revenue bill to finance the war by placing the primary burden on corporations that had made “excess profits” (p.244) from military contracts.  La Follette was forced to leave the Senate in early 1918 to care for his ill son, removing him from the storm that would have ensued had he continued to espouse his unwavering anti-war views.  Activist Crystal Eastman helped create the National Civil Liberties Bureau, a predecessor to the American Civil Liberties Union, and started a new radical journal, the Liberator, after the government prohibited a previous publication from using the mails.  Socialist Morris Hillquit, like La Follette, was able to stay out of the line of fire in 1918 when he contracted tuberculosis and was forced out of New York City and into convalescence in the Adirondack Mountains, 300 miles to the north.

           Although the United States was formally at war with Germany for the last 19 months of a war that lasted over four years, given the time needed to raise and train battle-ready troops, it was a presence on the battlefield for only six months.  The tardy arrival of Americans on the killing fields of Europe was, Kazin argues, “in part, an ironic tribute to the success of the peace coalition in the United States during the neutral years” (p.260-61).  Hundreds of thousands of Americans would likely have been fighting in France by the summer of 1917 if Theodore Roosevelt and his colleagues and allies had won the fight over “preparedness” in 1915 and 1916.  “But the working alliance between radical pacifists like Crystal Eastman and progressive foes of the military like La Follette severely limited what the advocates of a European-style force could achieve – before Woodrow Wilson shed his own ambivalence and resolved that Americans had to sacrifice to advance self-government abroad and preserve the nation’s honor” (p.260-61).

          * * *

          Kazin’s energetic yet judicious work sheds valuable light on the diverse groups that steadfastly followed an alternate route for advancing self-government abroad – making the world safe for democracy – and preserving their nation’s honor.  As American attention to the Great War recedes in the aftermath of this month’s November 11th remembrances, Kazin’s work remains a timely reminder of the divisiveness of the conflict.

Thomas H. Peebles

La Châtaigneraie, France

November 16, 2018

 


Magic Moscow Moment

 

Stuart Isacoff, When the World Stopped to Listen:

Van Cliburn’s Cold War Triumph and Its Aftermath 

            Harvey Lavan Cliburn, Jr., known to the world as “Van,” was the pianist from Texas who at age 23 astounded the world when he won the first Tchaikovsky International Piano Competition in Moscow in 1958, at the height of the Cold War.  The Soviet Union, fresh from launching the satellite Sputnik into orbit the previous year and thereby gaining an edge on the Americans in worldwide technological competition, looked at the Tchaikovsky Competition as an opportunity to showcase its cultural superiority over the United States.  Stuart Isacoff’s When the World Stopped to Listen: Van Cliburn’s Cold War Triumph and Its Aftermath takes us behind the scenes of the 1958 competition to show the machinations that led to Cliburn’s selection in Moscow.

            They are intriguing, but come down to this: the young Cliburn was so impossibly talented, so far above his fellow competitors, that the competition’s jurors concluded that they had no choice but to award him the prize.  But before the jurors announced what might have been considered a politically incorrect decision to give the award to an American, they felt compelled to present their dilemma to Soviet party leader and premier Nikita Khrushchev. Considered, unfairly perhaps, a country bumpkin lacking cultural sophistication, Khrushchev asked who had been the best performer.  The answer was Cliburn.  According to the official Soviet version, Khrushchev responded with a simple, straightforward directive: “Then give him the prize” (p.156).

            Isacoff, a professional pianist as well as an accomplished writer, suggests that there was more to Khrushchev’s directive than what the official version allows.  But his response and the official announcement two days later, on April 14, 1958, that Cliburn had won first place make for an endearing high point of Isacoff’s spirited biography.  The competition in Moscow and its immediate aftermath form the book’s core, about 60%. Here, Isacoff shows how Cliburn became a personality known worldwide — “the classical Elvis” and “the American Sputnik” were just two of the monikers given to him – and how his victory contributed appreciably to a thaw in Cold War tensions between the United States and the Soviet Union. The remaining 40% of the book is split roughly evenly between Cliburn’s life prior to the Moscow competition, as a child prodigy growing up in Texas and his ascendant entry into the world of competitive piano playing; and his post-Moscow life, fairly described as descendant.

            Cliburn never recaptured the glory of his 1958 moment in Moscow, and his life after receiving the Moscow prize was a slow but steady decline, up to his death from bone cancer in 2013.  For the lanky, enigmatic Texan, Isacoff writes, “triumph and decline were inextricably joined” (p.8).

* * *

            Cliburn was born in 1934, in Shreveport, Louisiana, the only child of Harvey Lavan Cliburn, Sr., and Rildia Bee O’Bryan Cliburn.  When he was six, he moved with his parents from Shreveport to the East Texas town of Kilgore.  Despite spending his earliest years in Louisiana, Cliburn always considered himself a Texan, with Kilgore his hometown.   Cliburn’s father worked for Magnolia Oil Company, which had relocated him from Shreveport to Kilgore, a rough-and-tumble oil “company town.”  We learn little about the senior Cliburn in this biography, but mother Rildia Bee is everywhere. She was a dominating presence in her son’s life, not only in his youthful years but throughout his adulthood, up to her death in 1994 at age 97.

        Prior to her marriage, Rildia had been a pupil of renowned pianist Arthur Friedheim.  It was Southern mores, Isacoff suggests, that discouraged her from pursuing what looked like her own promising career as a pianist.  But with the arrival of her son, she found a new outlet for her seemingly limitless musical energies.  Rildia was “more teacher than nurturer” (p.12), Isacoff writes, bringing discipline and structure to her son, who had started playing the piano around age 3.  From the start, the “sonority of the piano was entwined with his feelings for his mother” (p.12).  By age 12, Cliburn had won a statewide piano contest, and had played with the Houston Symphony Orchestra in a radio concert.  In adolescence, with his father fading in importance, Cliburn’s mother continued to dominate his life. “Despite small signs of teenage waywardness, when it came to his mother, Van was forever smitten” (p.21).

               In 1951, when their son was 17, Rildia and Harvey Sr. sent him off to New York to study at the prestigious Juilliard School, a training ground for future leaders in music and dance.  There, he became a student of the other woman in his life, Ukraine-born Rosina Lhévinne, a gold-medal graduate of the Moscow Conservatory whose late husband Josef had been considered one of the world’s greatest pianists.  Like Rildia, Lhévinne too was a force of nature, a powerful influence on the young Cliburn.  Improbably, Lhévinne and Rildia for the most part saw eye to eye on the best way to nurture the talents of the prodigious young man.  Both women focused Cliburn on “technical finesse and beauty of sound rather than on musical structure,” realizing that his best qualities as a pianist “rested on surface polish and emotional persuasiveness” (p.54).  Each recognized that for Cliburn, music would always be “visceral, not abstract or academic.  He played the way he did because he felt it in the core of his being” (p.34).

           More than Rildia, Lhévinne was able to show Cliburn how to moderate and channel these innate qualities.  Without her stringent guidance, Isacoff indicates, Cliburn might have lapsed into “sentimentality, deteriorating into the pianistic mannerisms of a high romantic” (p.56). Although learning through Lhévinne to hold his interpretative flourishes in check, Cliburn’s “overriding personality – emotionally exuberant, and unshakably sentimental – was still present in every bar” (p.121).  By the time he left for the Moscow competition, Cliburn had demonstrated a “natural ability to grasp and convey the meaning of the music, to animate the virtual world that arises through the art’s subtle symbolic gestures. It set him apart” (p.18).

           During his Juilliard years in New York, the adult Cliburn personality the world would soon know came into view: courteous and generous, sentimental and emotional.  He had by then also developed the idiosyncratic habit of being late for just about everything, a habit that continued throughout his life.  Isacoff mentions one concert after another in which Cliburn was late by periods that often became a matter of hours.  Both in the United States and abroad, he regularly compensated for showing up late by beginning with America’s national anthem, “The Star Spangled Banner.”  At Juilliard, Cliburn also began a smoking habit that stayed with him for the remainder of his life.  Except when he was actually playing — when he had the habit of looking upward, “as if communing with the heavens whenever the music reached an emotional peak” (p.6) — it was difficult to get a photo of him without a cigarette in his hands or mouth.

           It may have been at Juilliard that Cliburn had his first homosexual relationship, although Isacoff plays down this aspect of Cliburn’s early life.  He mentions Cliburn’s experience in high school dating a girl and attending the senior prom.  Then, a few pages later, he notes matter-of-factly that a friendship with a fellow male Juilliard student had “blossomed into romance” (p.35).  But there are many questions about Cliburn’s sexuality that seem pertinent to our understanding of the man.  Did Cliburn undergo any of the torment that often accompanies the realization in adolescence that one is gay, especially in the 1950s?  Did he “come out” to his friends and acquaintances, in Texas or New York, or did he live the homosexual life largely in the closet?  Were his parents aware of his sexual identity and if so, what was their reaction?  None of these questions is addressed here.

             With little fanfare, Juilliard nominated Cliburn in early 1958 for the initial Tchaikovsky International Competition, taking advantage of an offer from the Rockefeller Foundation to pay travel expenses for one entrant in each of the competition’s two components, piano and violin.  The Soviet Union, which paid the remaining expenses for the participants, envisioned a “high-culture version of the World Cup, pitting musical talents from around the globe against one another” (p.4). The Soviets confidently assumed that showcasing their violin and piano expertise after their technological success the previous year with the Sputnik launch would provide another propaganda victory over the United States.

             Soviet pianists who wished to enter the competition had to pass a daunting series of tests, musical and political, to qualify, undergoing training similar to that of the country’s Olympic athletes.  Many of the Soviet Union’s emerging piano stars were reluctant to jump into the fray.  Each had a specific reason, along with a “general reluctance to become involved in the political machinations of the event” (p.59).  Lev Vlassenko, a “solid, well-respected pianist” who became a lifelong friend of Cliburn in the aftermath of the competition, emerged as the presumptive favorite, “clearly destined to win” (p.60).

            On the American side, the US State Department only reluctantly gave its approval to the competition, fearing that it would be rigged.  The two pianists whom the Soviets considered the most talented Americans, Jerome Lowenthal and Daniel Pollack, traveled to Moscow at their own expense, unlike Cliburn (pop singer Neil Sedaka was among the competitors for the US but was barred by the Soviets as too closely associated with decadent rock ‘n roll; they undoubtedly did Sedaka a favor, as his more lucrative pop career was just then beginning to take off).  Other major competitors came from France, Iceland, Bulgaria, and China.

            For the competition’s first round, Cliburn was assigned pieces from Bach, Mozart, Chopin, Scriabin, Rachmaninoff, Liszt and Tchaikovsky.  The audience at the renowned Moscow Conservatory, where the competition took place, fell from the beginning for the Texan and his luxurious sound. They “swooned at the crooner in him . . . Some said they discerned in his playing a ‘Russian soul’” (p.121).  But among the jurors, who carried both political and aesthetic responsibilities, reaction to Cliburn’s first round was mixed.  Some were underwhelmed with his renditions of Mozart and Bach, but all found his Tchaikovsky and Rachmaninoff “out of this world,” as one juror put it (p.120).

          Isacoff likens the jurors’ deliberations to a round of speed dating, “where the sensitive antennae of the panelists hone in on the character traits of each candidate. . . There is no magical formula for choosing a winner; in the end, the decision is usually distilled down to a basic overriding question: Do I want to hear this performer again?”(p.117).  Famed pianist Sviatoslav Richter, who served on the jury, emerges here as the equivalent of the “hold out juror” in an American criminal trial, “willing to create a serious ruckus when he felt that the deck was being stacked against the American.  As the competition progressed, his fireworks in the jury room would be every bit the equal of the ones onstage” (p.114).

            Cliburn’s second round program was designed to show range.  Beethoven, Chopin and Brahms were the heart of a romantic repertoire.  He also played the Prokofiev Sixth, a modernist piece that reflected the political tensions and fears of 1940 Russia.  Cliburn received a 15-minute standing ovation at the end of the round, the audience voting literally with its feet and hands.  In the jury room, Richter continued to press the case for Cliburn, although the jury ranked him only third, tied with Soviet pianist Naum Shtarkman. Overall, Vlassenko ranked first and eminent Chinese pianist Shikun Liu second.

             But in the third round, Cliburn blew the competition away.  The round began with Tchaikovsky’s First Piano Concerto, for which Cliburn delivered an “extraordinary” interpretation, with every tone “imbued with an inner glow, with long phrases concluding in an emphatic, edgy pounce. The effect was simply breathtaking” (p.146). Cliburn’s rendition of Rachmaninoff’s “treacherously difficult” (p.147) Piano Concerto no. 3 was even more powerful.  In prose that strains to capture Cliburn’s unique brilliance, Isacoff explains:

After Van, people would never again hear this music the same way. . . There is no simple explanation for why in that moment Van played perhaps the best concert of his life. Sometimes a performer experiences an instant of artistic grace, when heaven seems to open up and hold him in the palm of its hand – when the swirl of worldly sensations gives way to a pervasive, knowing stillness, and he feels connected to life’s unbroken dance.  If that was not exactly Van’s experience when playing Rachmaninoff Concerto no. 3, it must have come close (p.146-47).

         Cliburn had finally won over even the most recalcitrant jurors, who briefly considered a compromise in which Cliburn and Vlassenko would share the top prize.  But the final determination was left to premier Khrushchev.  The Soviet leader’s instantaneous and decisively simple response quoted above was the version released to the press.  But with the violin component of the competition going overwhelmingly to the Soviets, the ever-shrewd Khrushchev appears to have concluded that awarding the piano prize to the American would underscore the competition’s objectivity and fairness.  One advisor recalled Khrushchev saying to her: “The future success of this competition lies in one thing: the justice that the jury gives” (p.156).  The jury’s official and public decision of April 14, 1958 had Cliburn in first place, with Vlassenko and Liu sharing second.  Cliburn could not have accomplished what he did, Isacoff writes, without Khrushchev, his “willing partner in the Kremlin” (p.206).

        Cliburn had another willing partner in Max Frankel, then the Moscow correspondent for the New York Times (and later, its Executive Editor). Frankel had sensed a good story during the competition and reported extensively on all its aspects.  He also pushed his editors back home to put his dispatches on page 1.  One of his stories forthrightly raised the question whether the Soviets would have the courage to award the prize to Cliburn.  For Isacoff, Frankel’s reporting and the pressure he exerted on his Times editors to give it a prominent place also contributed to the final decision.

             After his victory in Moscow, Cliburn went on an extensive tour within the Soviet Union. To the adoring Russians, Cliburn represented the “new face of freedom.” Performing under the auspices of a repressive regime, he “seemed to answer to no authority other than the shifting tides of his own soul” (p.8).  Naïve and politically unsophisticated, Cliburn raised concerns at the State Department when he developed the habit of describing the Russians as “my people,” comparing them to Texans and telling them that he had never felt so at home anywhere else.

          A month after the Moscow victory, Cliburn returned triumphantly to the United States amidst a frenzy that approached what he had generated in the Soviet Union.  He became the first (and, as of now, only) classical musician to be accorded a ticker tape parade in New York City, in no small measure because of lobbying by the New York Times, which saw the parade as vindication for its extensive coverage of the competition.

          After Cliburn’s Moscow award, the Soviet Union and the United States agreed to host each other’s major exhibitions in the summer of 1959.  It started to seem, Isacoff writes, that “after years of protracted wrangling, a period of true detente might actually be dawning” (p.174).   The cultural attaché at the American Embassy in Moscow wrote that Cliburn had become a “symbol of the unifying friendship that overcomes old rivalries.  . . a symbol of art and humanity overruling political pragmatics” (p.206).

            A genuine if improbable bond of affection had developed in Moscow between Khrushchev and Cliburn. That bond endured after Cold War relations took several turns for the worse, first after the Soviets shot down the American U-2 spy plane in 1960, followed by the erection of the Berlin Wall in 1961, and the direct confrontation in 1962 over Soviet placement of missiles in Cuba. The bond even continued after Khrushchev’s fall from power in 1964, indicating that it had some basis beyond political expediency.

           But Cliburn’s post-Moscow career failed to recapture the magic of his spring 1958 moment.  The post-Moscow Cliburn seemed to be beleaguered by self-doubt and burdened by psychological tribulations that are not fully explained here.  “Everyone had expected Van’s earlier, youthful qualities to mature and deepen over time,” Isacoff writes.  But he “never seemed to grow into the old master they had hoped for . . . At home, critics increasingly accused Van of staleness, and concluded he was chasing after momentary success with too little interest in artistic growth” (p.223).  Even in the Soviet Union, where he made several return visits, critics “began to complain of an artistic decline” (p.222).  In these years, Cliburn “developed an enduring fascination with psychic phenomena and astrology that eventually grew into an obsession. The world of stargazing became a vital part of his life” (p.53).

           Cliburn’s mother remained a dominant force in his life throughout his post-Moscow years, serving as his manager until she was almost 80 years old.  As she edged toward 90, she and her son continued to address one another as “Little Precious” and “Little Darling” (p.230).  Her death at age 97 in 1994 was predictably devastating for Cliburn. In musing about his mother’s effect on Cliburn’s career trajectory, Isacoff wonders whether Rildia Bee, the “wind that filled his sails” might also have been the “albatross that sunk him” (p.243).  While many thought that Cliburn might collapse with the death of his mother, by this time he was in a relationship with Tommy Smith, a music student 29 years younger.  With Smith, Cliburn had “at last found a fulfilling, loving union” (p.242). Smith traveled regularly with Cliburn, even accompanying him to Moscow in 2004, where none other than Vladimir Putin presented Cliburn with a friendship award.  Smith was at Cliburn’s side throughout his battle with bone cancer, which took the pianist’s life in 2013 at age 79.

* * *

            Tommy Smith became the happy ending to Cliburn’s uneven life story — a story which for Isacoff resembles that of a tragic Greek hero who “rose to mythical heights in an extraordinary victory that proved only fleeting, before the gods of fortune exacted their price” (p.8).

Thomas H. Peebles

La Châtaigneraie, France

September 5, 2018

 


Inside Both Sides of Regime Change in Iraq

 

John Nixon, Debriefing the President:

The Interrogation of Saddam Hussein 

          When Saddam Hussein was captured in Iraq in December 2003, it marked only the second time in the post-World War II era in which the United States had detained and questioned a deposed head of state, the first being Panama’s Manuel Noriega in 1989.  On an American base near Baghdad, CIA intelligence analyst John Nixon led the initial round of questioning of Saddam in December 2003 and January 2004.  In the first two-thirds of Debriefing the President: The Interrogation of Saddam Hussein, Nixon shares some of the insights he gained from his round of questioning — insights about Saddam himself, his rule, and the consequences of removing him from power.

        Upon his return to the United States, Nixon became a regular at meetings on Iraq at the White House and National Security Council, including several with President George W. Bush.   The book’s final third contains Nixon’s account of these meetings, which continued up to the end of the Bush administration. In this portion of the book, Nixon also reflects upon the role of CIA intelligence analysis in the formulation of foreign policy.  Nixon is one of the few individuals — maybe the only individual — who had extensive exposure both to Saddam and to those who drove the decision to remove him from power in 2003.  Nixon thus offers readers of this compact volume a formidable inside perspective on Saddam’s regime and the US mission to change it.

         But while working through Nixon’s account of his meetings with Saddam, I was puzzled by his title, “Debriefing the President,” asking myself, which president? Saddam Hussein had held the title of President of the Republic of Iraq and continued to refer to himself as president after he had been deposed, clinging tenaciously to the idea that he was still head of the Iraqi state. So does the “president” in the title refer to Saddam Hussein or George W. Bush? With the first two-thirds of the book detailing Nixon’s discussions with Saddam, I began to think that the reference was to the former Iraqi leader, which struck me as oddly respectful of a brutal tyrant and war criminal.  But this ambiguity may be Nixon’s way of highlighting one of his major objectives in writing this book.

          Nixon seeks to provide the reading public with a fuller and more nuanced portrait of Saddam Hussein than that which animated US policymakers and prevailed in the media at the time of the US intervention in Iraq, which began fifteen years ago next month.  By detailing the content of his meetings with Saddam to the extent possible – the book contains numerous passages blacked out by CIA censors — Nixon hopes to reveal the man in all his twisted complexity. He recognizes that Saddam killed hundreds of thousands of his own people, launched a fruitless war with Iran and used chemical weapons without compunction.  He “took a proud and very advanced society and ground it into dirt through his misrule” (p.12), Nixon writes, and thus deserves the sobriquet “Butcher of Baghdad.”  But while “tyrant,” “war criminal” and “Butcher of Baghdad” can be useful starting points in understanding Saddam, Nixon seems to be saying, they should not be the end point. “It is vital to know who this man was and what motivated him.  We will surely see his likes again” in the Middle East (p.9), he writes.

          When Nixon returned to the United States after his interviews with Saddam, he was surprised that none of the high-level policy makers he met with seemed interested in the question whether the United States should have removed Saddam from power.  Nixon addresses this question in his final pages with a straightforward and unsparing answer: regime change was a catastrophe for both Iraq and the United States.

* * *

           Nixon began his career as a CIA analyst in 1998.  Working at CIA Headquarters in Virginia, he became a “leadership analyst” on Iraq, responsible for developing information on Saddam Hussein: “the family connections that helped keep him in power, his tribal ties, his motives and methods, everything that made him tick. It was like putting together a giant jigsaw puzzle with small but important pieces gleaned from clandestine reporting and electronic intercepts” (p.38).  In October 2003, roughly five months after President Bush had famously declared “mission accomplished” in Iraq, Nixon was sent from headquarters to Baghdad.  There, he helped CIA operatives and Army Special Forces target individuals for capture.  At the top of the list was HVT-1, High Value Target Number 1, Saddam Hussein.

           After Saddam was captured in December 2003 at the same farm near his hometown of Tikrit where he had taken refuge in 1959 after a bungled assassination attempt upon the Iraqi prime minister, Nixon confirmed Saddam’s identity.  US officials had assumed that Saddam would “kill himself rather than be captured, or be killed as he tried to escape. When he was captured alive, no one knew what to do” (p.76).  Nixon was surprised that the CIA became the first US agency to meet with Saddam. His team had little time to prepare or coordinate with other agencies with an interest in information from Saddam, particularly the Defense Department and the FBI.  “Everything had to be done on the fly.  We learned a lot from Saddam, but we could have learned a lot more” (p.84-85).

          Nixon’s instructions from Washington were that no coercive techniques were to be used during the meetings.  Saddam was treated, Nixon indicates, in “exemplary fashion – far better than he treated his old enemies.  He got three meals a day.  He was given a Koran and an Arabic translation of the Geneva conventions. He was allowed to pray five times each day according to his Islamic faith” (p.110).   But Nixon and his colleagues had few carrots to offer Saddam in return for his cooperation. Their position was unlike that of a prosecutor who could ask a judge for leniency in sentencing in exchange for cooperation.  Nixon told Saddam that the meetings were “his chance, once and for all, to set the record straight and tell the world who he was” (p.83).  Gradually, Nixon and his colleagues built a measure of rapport with Saddam, who clearly enjoyed the meetings as a break from the boredom of captivity.

          Saddam, Nixon found, had  “great charisma” and “an outsize presence. Even as a prisoner who was certain to be executed, he exuded an air of importance” (p.81-82).  He was “remarkably forthright when it suited his purposes. When he felt he was in the clear or had nothing to hide, he spoke freely. He provided interesting insights into the Ba’ath party and his early years, for example. But we spent most of our time chipping away at layers of defense meant to stymie or deceive us, particularly about areas such as his life history, human rights abuse, and WMD, to name just a few” (p.71-72).

         Saddam saw himself as the “personification of Iraq’s greatness and a symbol of its evolution into a modern state,” with a “grand idea of how he fit into Iraq’s history” (p.86).  He was “always answering questions with questions of history, and he would frequently demand to know why we had asked about a certain topic before he would give his answer” (p.100). He often feigned ignorance to test his interrogators’ knowledge.  He frequently began his answers “by going back to the rule of Saladin.”  Nixon “often wondered afterward how many people told Saddam Hussein to keep it brief and lived to tell about it” (p.100).

       The meetings revealed to Nixon and his colleagues that the United States had seriously underestimated the degree to which Saddam saw himself as buffeted between his Shia opponents and their Iranian backers on one side, and Sunni extremists such as al-Qaeda on the other.  Saddam, himself a Sunni who became more religious in the latter stages of his life, could not hide his enmity for Shiite Iran.  He saw Iraq as the “first line of Arab defense against the Persians of Iran and as a Sunni bulwark against its overwhelmingly Shia population” (p.4).  But Saddam considered Sunni fundamentalism to be an even greater threat to his regime than Iraq’s majority Shiites or Iran.

       What made the Sunni fundamentalists, the Wahhabis, so threatening was that they “came from his own Sunni base of support. They would be difficult to root out without alienating the Iraqi tribes, and they could rely on a steady stream of financial support from Saudi Arabia. If the Wahhabists were free to spread their ideology, then his power base would rot from within” (p.124).  Saddam seemed genuinely mystified by the United States’ intervention in Iraq. He considered himself an implacable foe of Islamic extremism, and felt that the 9/11 attacks should have brought his country and the United States closer together.  Moreover, as he mentioned frequently, the United States had supported his country during the Iran-Iraq war.

          The meetings with Saddam further confirmed that in the years leading up to the United States intervention, he had begun to disengage from ruling the country.  At the time hostilities began, he had delegated much of the running of the government to subordinates and was mainly occupied with nongovernmental pursuits, including writing a novel.  Saddam in the winter of 2003 was “not a man bracing for a pulverizing military attack” (p.46), Nixon writes.  In all the sessions, Saddam “never accepted guilt for any of the crimes he was accused of committing, and he frequently responded to questions about human rights abuses by telling us to talk with the commander who had been on the scene” (p.129).

          On the eve of the 1991 Gulf War, President George H.W. Bush had likened Saddam to Hitler, and the idea took hold in the larger American public. But not once during the interviews did Saddam say he admired either Hitler or Stalin.  When Nixon asked which world leaders he most admired, Saddam said de Gaulle, Lenin, Mao and George Washington, because they were founders of political systems and thinkers.  Nixon quotes Saddam as saying, “Stalin does not interest me. He was not a thinker. For me, if a person is not a thinker, I lose interest” (p.165).

          When Nixon told Saddam that he was leaving Iraq to return to Washington, Saddam gave him a firm handshake and told Nixon to be just and fair to him back home.  Nearly three years later, in December 2006, Saddam was put to death by hanging in a “rushed execution in a dark basement” in an Iraqi Ministry (p.270), after the United States caved to Iraqi pressure and turned him over to what turned out to be little more than a Shiite lynch mob.  Nixon concludes that Saddam’s unseemly execution signaled the final collapse of the American mission in Iraq.  Saddam, Nixon writes, was:

not a likeable guy. The more you got to know him, the less you liked him. He had committed horrible crimes against humanity.  But we had come to Iraq saying that we would make things better.  We would bring democracy and the rule of law.  No longer would people be awakened by a threatening knock on the door.  And here we were, allowing Saddam to be hanged in the middle of the night (p.270).

* * *

            Nixon’s experiences with Saddam made him a familiar face at the White House and National Security Council when he returned to the United States in early 2004.  His meetings with President Bush convinced him that Bush never came close to getting a handle on the complexities of the Middle East.  After more than seven years in office, the president “still didn’t understand the region and the fallout from the invasion” (p.212). Nixon attributes Bush’s decision to take the country into war largely to the purported attempt Saddam had made on his father’s life in the aftermath of the first Gulf War – a “misguided belief,” in his estimation.  The younger Bush and his entourage ordered the invasion of a country “without the slightest clue about the people they would be attacking. Even after Saddam’s capture, the White House was only looking for information that supported its decision to go to war” (p.235).

          One of the ironies of the Iraq War, Nixon contends, was that Saddam Hussein and George W. Bush were alike in many ways:

Both had haughty, imperious demeanors.  Both were fairly ignorant of the outside world and had rarely traveled abroad.  Both tended to see things as black and white, good and bad, for and against, and became uncomfortable when presented with multiple alternatives. Both surrounded themselves with compliant advisers and had little tolerance for dissent. Both prized unanimity, at least when it coalesced behind their own views. Both distrusted expert opinion (p.240).

       Nixon is almost as tough on the rest of the team that surrounded Bush and contributed to the decision to go to war, although he found Vice President Dick Cheney to be a source of caution, providing a measure of good sense to discussions.  Cheney was “professional, dignified, and considerate . . . an attentive listener” (p.197-98).  But he is sharply critical of the CIA Director at the time, George Tenet (even while refraining from mentioning the remark most frequently attributed to his former boss, that the answer to the question whether Saddam was stockpiling weapons of mass destruction was a “slam dunk”).

         In Nixon’s view, Tenet transformed the agency’s intelligence gathering function from one of neutral fact-finding, laying out the best factual assessment possible in a given situation, into an agency whose role was to serve up intelligence reports tailored to support the administration’s positions.  Tenet was “too eager to please the White House.  He encouraged analysts to punch up their reports even when the evidence was flimsy, and he surrounded himself with yes men” (p.225).  Nixon recounts how, prior to the 2003 invasion, the line level Iraq team at the CIA was given three hours to respond to a paper prepared by another agency purporting to show a connection between Saddam’s regime and the 9/11 attacks — a paper the team found “full of holes, inaccuracies, sloppy reporting and pie-in-the-sky analysis” (p.229).  Line level analysts drafted a dissenting note, but its objections were “gutted” by CIA leadership (p.230) and the faulty paper went on to serve as an important basis to justify the invasion of Iraq.

          Nixon left the agency in 2011. But in the latter portion of his book he delivers his fair share of parting shots at the post-Iraq CIA, which has become in his view a “sclerotic organization” (p.256) that “badly needs fixing” (p.261).  The agency’s leadership needs to “stop fostering the notion that the CIA is omniscient” and the broader foreign policy community needs to recognize that intelligence analysts can provide “only information and insights, and can’t serve as a crystal ball to predict the future” (p.261).  But as Nixon fires shots at his former agency, he lauds the line level CIA analysts with whom he worked. The analysts represent the “best and the brightest our country has to offer . . . The American people are well served, and their tax dollars well spent, by employing such exemplary public servants. I can actually say about these folks, ’Where do we get such people?’ and not mean it sarcastically” (p.273-74).

* * *

         Was Saddam worth removing from power, Nixon asks in his conclusion. “As of this writing, I see only negative consequences for the United States from Saddam’s overthrow” (p.257).  No serious Middle East analyst believes that Iraq was a threat to the United States, he argues.  The United States spent trillions of dollars and wasted the lives of thousands of its military men and women “only to end up with a country that is infinitely more chaotic than Saddam’s Ba’athist Iraq” (p.258).  The United States could have avoided this chaos, which has given rise to ISIS and other forms of Islamic extremism, “had it been willing to live with an aging and disengaged Saddam Hussein” (p.1-2).  Nixon’s conclusion, informed by his opportunity to probe the mindset of both Saddam Hussein and those who determined to remove him from power, rings true today and stings sharply.

Thomas H. Peebles

La Châtaigneraie, France

January 31, 2018

Filed under American Politics, Middle Eastern History, United States History

Minding Our Public Language

Mark Thompson, Enough Said:

What’s Gone Wrong With the Language of Politics 

          In Enough Said: What’s Gone Wrong with the Language of Politics, Mark Thompson examines the role which “public language” — the language we use “when we discuss politics and policy, or make our case in court, or try to persuade anyone of anything else in a public context” (p.2) — plays in today’s cacophonous political debates.  Thompson, currently Chief Executive Officer of The New York Times and before that Director-General of the BBC, contends that there is a crisis in contemporary democratic decision-making that at heart is a crisis of political language.  Public language appears to be losing its power to explain and engage, thereby threatening the bond between people and politicians. “Intolerance and illiberalism are on the rise almost everywhere,” Thompson writes, and the way our public language has changed is an “important contributing and exacerbating factor” (p.297-98).

          Thompson seeks to revive the formal study of rhetoric as a means to understand and even reverse the contemporary crisis of public language.  Rhetoric is simply the “study of the theory and practice of public language” (p.2).  Rhetoric “helps us to make sense of the world and to share that understanding. It also teaches us to ‘pay heed’ to the ‘opposite side,’ the other” (p.361). Democracies need public debate and therefore competition in the mastery of public persuasion. Rhetoric, the language of explanation and persuasion, enables collective decision-making to take place.

        Across the book’s disparate parts, Thompson’s central concern is today’s angry and polarized political climate, often referred to as “populist,” in which the word “compromise” has become pejorative, the adjective “uncompromising” is a compliment, and the “public presumption of good faith between opposing parties and factions” (p.97) seems to have largely evaporated.  Thompson recognizes that the current populist wave is founded upon a severe distrust of elites.  Given his highest-of-high-level positions at the BBC and The New York Times (along with a degree from Oxford University), Thompson is about as elite as one can become.  He thus observes from the top of a besieged citadel.  Unsurprisingly, Thompson brings a well-informed Anglo-American perspective to his observations, and he shines in pointing to commonalities as well as differences between Great Britain and the United States. There are occasional glances at continental Europe and elsewhere – Silvio Berlusconi’s rhetorical skills are examined, for example – but for the most part this is an analysis of public language at work in contemporary Britain and the United States.

          In the book’s first half, Thompson uses the terminology of classical rhetoric to frame an examination of what he considers the root causes of today’s crisis in public language. Among them are the impact of social media on political discourse and how the pervasive use of sales and marketing language has devalued public debate.  Social media platforms such as Facebook and Twitter have given rise to a “Darwinian natural selection of words and phrases,” he writes, in which, “by definition, the only kind of language that emerges from this process is language that works. You hear it, you get it, you pass it on. The art of persuasion, once the grandest of the humanities and accessible at its highest level only to those of genius – a Demosthenes or a Cicero, a Lincoln or a Churchill – is acquiring many of the attributes of a computational science. Rhetoric not as art, but as algorithm” (p.187).  The use of language associated with sales and marketing serves further to give political language “some of the brevity, intensity and urgency we associate with the best marketing,” while stripping away its “explanatory and argumentative power” (p.191).

          In the second half, Thompson shifts away from applying notions of classical rhetoric to public debate and focuses more directly upon the debate itself in three settings: when scientific consensus confronts spurious scientific claims; when claims for tolerance and respect for racial, religious or ethnic minorities seek to override untrammeled freedom of expression; and when, after the unprecedented and still unfathomable devastation of the 20th century’s world wars, leaders seek to take their country into war.  Thompson’s analyses of these situations are lucid and clearheaded, but for all the common sense and good judgment that he brings to them, I found this section more conventional and less original than the book’s first half, and consequently less intriguing.

* * *

       Thompson starts with a compelling example to which he returns throughout the book, involving the once ubiquitous Sarah Palin and her rhetorical attack on the Affordable Care Act (ACA), better known as Obamacare. Before the ACA was signed into law, one Elizabeth McCaughey, an analyst with the Manhattan Institute, a conservative think tank, looked at a single clause among the 1,000 plus pages of the proposed legislation and drew the conclusion that the act required patients over a certain age to be counseled by a panel of experts on the options available for ending their lives. McCaughey’s conclusion was dead wrong. The clause merely clarified that expenses would be covered for those who desired such counseling, as proponents of the legislation made clear from the outset.

         No matter. Palin grabbed the ball McCaughey had thrown out and ran with it. In one of her most Palinesque moments, the one-term Alaska governor wrote on her Facebook page:

The America I know and love is not one in which my parents or my baby with Down Syndrome will have to stand in front of Obama’s “death panel” so his bureaucrats can decide, based on a subjective judgment of their “level of productivity in society,” whether they are worthy of health care. Such a system is downright evil (p.4-5).

By placing the words “death panel” and “level of productivity in society” in quotation marks, Palin left the impression that she was quoting from the statute itself.  Thus presented, the words conjured up early 20th century eugenics and Nazi doctors at death camps.  To her supporters, Palin had uncovered “nothing less than a conspiracy to murder” (p.7).

        In the terminology of classical rhetoric, “death panel” was an enthymeme, words that might not mean much to a neutral observer but were all that Palin’s supporters needed to “fill in the missing parts of her argument to construct a complete critique of Obamacare” (p.30).   It had the power of compression, perfect for the world of Facebook and Twitter, and the effect of a synecdoche, in which the part stands for the whole.  Its words were prophetic, taking an imagined future scenario and presenting it as current reality.  Palin’s claim was symptomatic of today’s polarized political debate. It achieved its impact “by denying any complexity, conditionality or uncertainty,” building on a presumption of “irredeemable bad faith,” and rejecting “even the possibility of a rational debate” with the statute’s supporters (p.17).

        Thompson considers Palin’s rhetorical approach distinct in key ways from that of Donald Trump.  Writing during the 2016 presidential campaign, Thompson observes that Trump had “rewritten the playbook of American political language” (p.80). Trumpian rhetoric avoids cleverness or sophistication:

There are no cunning mousetraps like the “death panel.” The shocking statements are not couched in witty or allusive language. His campaign slogan – Make America Great Again! – could hardly be less original or artful. Everything is intended to emphasize the break with the despised language of the men and women of the Washington machine. There is a wall between them and you, Trump seems to say to his audience, but I am on this side of the wall alongside you. They treat you as stupid, but you understand things far better than they do. The guarantee that I see the world as you do is the fact that I speak in your language, not theirs (p.79-80).

        Yet Thompson roots both Palin’s populism and that of Trump in a rhetorical approach dating from the 18th century Enlightenment that he terms “authenticism,” a mode of expression that prioritizes emotion and simplicity of language, and purports to engage with the “lowliest members of the chosen community” (p.155).  To the authenticist, if something “feels true, then in some sense it must be true” (p.155).  Since the Enlightenment, authenticism has been in tension with “rhetorical rationalism,” which venerates fact-based arguments and empirical thinking.  Authenticism rises as trust in public leaders declines.   Authenticists take what their rationalist opponents regard as their most egregious failings, “fantasies dressed up as facts, petulance, tribalism, loss of control of one’s own emotions,” and “flip them into strengths.”  Rationalists may consider authenticism “pitifully cruel, impossible to sustain, downright crazy,” but it can be a compelling rhetorical approach for the “right audience in the right season” (p.356).

        Authenticism found the right audience in the right season in Brexit, Britain’s June 2016 referendum vote to leave the European Union, with people voting for Brexit because they were “sick and tired of spin, prevarication and policy jargon” (p.351).   A single topic referendum such as Brexit, unlike a general election, requires a “minimum level of understanding of the issues and trade-offs involved,” Thompson writes. By this standard, the Brexit referendum should be considered a “disgrace” (p.347).  Those opposing Brexit had little to offer “in the way of positivity to counterbalance the threats; its Tory and Labour leaders seemed scarcely more enthusiastic about Britain’s membership [in] the EU than their opponents.  Their campaign was lackluster and low-energy.  They deserved to lose” (p.347).

        In understanding how classical rhetoric influences public debate, Thompson attaches particular significance to George Orwell’s famous essay “Politics and the English Language,” the “best-known and most influential reflection on public language written in English in the twentieth century” (p.136).  Although Orwell claimed that his main concern in the essay was clarity of language, what he cared most about, Thompson contends, was the “beauty of language . . . Orwell associated beauty of language with clarity, and clarity with the ability of language to express rather than prevent thought and, by so doing, to support truthful and effective political debate” (p.143).  Orwell’s essay thus embodied the “classical understanding of rhetoric,” specifically the “ancient belief that the civic value of a given piece of rhetoric is correlated with its excellence as a piece of expression” (p.143).

* * *

      In the book’s second half, Thompson looks at the public debate over a host of contentious issues that have riveted the United Kingdom and the United States in recent years, beginning with the deference that democratic debate should accord to questionable scientific claims.  So-called climate skeptics, who challenge the overwhelming scientific consensus on anthropogenic global warming, can make what superficially sounds like a compelling case that their views should be entitled to equal time in forums dedicated to the elaboration of public issues, such as those provided by the BBC or The New York Times.  Minority scientific views have themselves frequently evolved into accepted scientific understanding (one 19th century example was the underlying cause of the Irish potato famine, discussed here  in 2014 in my review of John Kelly’s The Graves Are Walking).  Refusal to accord a forum for such views can easily be cast as a “cover up.”

         Thompson shows how members of Britain’s most distinguished scientific body, the Royal Society, once responded to public skepticism over global warming by becoming advocates, presenting the scientific consensus on the need for action in terms unburdened by the caution and caveats that are usually part of scientific explanation, and emphasizing the bad faith of climate change skeptics. Its efforts largely backfired. The more scientists sound like politicians with an agenda, the “less convincing they are likely to be” (p.211).   The same issue arose when a British medical researcher claimed to have found a connection between autism and measles, mumps and rubella vaccinations. The research was subsequently found to be fraudulent, but not before a handful of celebrities and a few politicians jumped aboard an anti-vaccination movement (including, in the United States, Robert F. Kennedy, Jr., and Donald Trump, when he was more celebrity than politician), with an uncountable number of parents opting not to have their children vaccinated.

       Thompson’s discussion of the boundaries of tolerance and free speech raises a similar issue: to what degree should democratic forums include those whose views are antithetical to democratic norms? While at the BBC, Thompson needed to decide whether the BBC would invite the British National Party (BNP), which flirted with Holocaust denial but had demonstrated a substantial following at the ballot box, to a broadcast that involved representatives of Britain’s major parties. In the face of strident opposition, Thompson elected to include the BNP representative and explains why here: the public “had the right to see him and listen to him responding to questions put to him by a studio audience itself made up of people like them. They did so and drew their own conclusions” (p.263).

       Thompson also delivers a full-throated rebuke to American universities that have disinvited speakers because students objected to their views.  The way to defeat extremists and their defenders, whether in faculty lounges or the halls of power, is simply to out-argue them, he contends.  Freedom of expression is best considered a right to be enjoyed “not just by those with something public to say but by everyone” (p.262-63), as a means by which an audience can seek to reach its own judgment. With a few exceptions like child pornography or incitement to violence, Thompson finds no support for the notion that suppressing ideas of which we disapprove is a better way to defeat them in a modern democracy than confronting and debating them in public.

       In a chapter entitled simply “War,” Thompson argues that war is today the greatest rhetorical test for a political leader:

To convince a country to go to war, or to rally a people’s courage and optimism during the course of that war, depends on your ability to persuade those who are listening to you to risk sacrificing themselves and their children for some wider public purpose. It is words against life and limb. [It includes the] need for length and detail as you explain the justification of the war; the simultaneous need for brevity and emotional impact; authenticity, rationality, authority; the search for a persuasiveness that does not – cannot – sound anything like marketing given the blood and treasure that are at stake (p.219).

        Today, it is almost impossible for any war to be well received in a democracy, except in the very short term.  This is undoubtedly an advance over the days when war was celebrated for its gallantry and chivalry. But, drawing upon the opposition to the Vietnam War in the United States in the 1960s, and to Britain’s decision to join the United States in the second Iraq war in 2003, Thompson faults anti-war rhetoric for its tendency to assume bad faith almost immediately, to “omit awkward arguments or to downplay unresolved issues, to pretend that difficult choices are easy, to talk straight past the other side in the debate, to oversimplify everything” (p.254-55).

* * *

      Thompson does not see today’s populist wave receding any time soon. “One can believe that populism always fails in the end – because of the crudity of its policies, its unwillingness to do unpopular but necessary things, its underlying divisiveness and intolerance – yet still accept that it will be a political fact of life in many western countries for years to come” (p.363).  He ends by abandoning the measured, “this-too-shall-pass” tone that prevails throughout most of his wide-ranging book to conclude on a near-apocalyptic note.   A storm is gathering, he writes, which threatens to be:

greater than any seen since the global infernos of the twentieth century. If the first premonitory gusts of a global populist storm were enough to blow Britain out of Europe and Donald Trump into the White House, what will the main blasts do? If the foretaste of the economic and social disruption to come was enough to show our public language to be almost wholly wanting in 2016, what will happen when the hurricane arrives? (p.364).

       Is there anything we can do to restore the power of public language to cement the bonds of trust between the public and its leaders?  Can rhetorical rationalists regain the upper hand in public debate? Thompson argues that we need to “put public language at the heart of the teaching of civics . . . We need to teach our children how to parse every kind of public language” (p.322).  Secondary school and university students need to know “how to listen, how to know when someone is trying to manipulate them, how to discriminate between good arguments and bad ones, how to fight their own corner clearly and honestly” (p.366).   This seems like a sensible starting place.  But it may not be sufficient to withstand the hurricane.

Thomas H. Peebles

Bordeaux, France

January 18, 2018

Filed under American Politics, British History, Intellectual History, Language, Politics

Honest Broker
Michael Doran, Ike’s Gamble:

America’s Rise to Dominance in the Middle East 
       On July 26, 1956, Egypt’s President Gamal Abdel Nasser stunned the world by announcing the nationalization of the Suez Canal, the waterway through Egypt linking the Mediterranean and Red Seas and a critical conduit for oil moving between the Persian Gulf and Europe. Constructed between 1859 and 1869, the canal was owned by the Anglo-French Suez Canal Company. What followed three months later was the Suez Crisis of 1956: on October 29, Israeli brigades invaded Egypt across its Sinai Peninsula, advancing to within ten miles of the canal.  Britain and France, following a scheme concocted with Israel to retake the canal and oust Nasser, demanded that both Israeli and Egyptian troops withdraw from the occupied territory. Then, on November 5th, British and French forces invaded Egypt and occupied most of the Canal Zone, the territory along the canal. The United States famously opposed the joint operation and, through the United Nations, forced Britain and France out of Egypt.  Nearly simultaneously, the Soviet Union ruthlessly suppressed an uprising in Hungary.

       The autumn of 1956 was thus a tumultuous time. Across the globe, it was a time when colonies were clamoring for and achieving independence from former colonizers, and the United States and the Soviet Union were competing for the allegiance of emerging states in what was coming to be known as the Third World.  In the volatile and complex Middle East, it was a time of rising nationalism. Nasser, a wildly ambitious army officer who came to power after a 1952 military coup had deposed the King of Egypt, aspired to become not simply the leader of his country but also of the Arabic-speaking world, even the entire Muslim world.  By 1956, Nasser had emerged as the region’s most visible nationalist. But his was far from the only voice seeking to speak for Middle Eastern nationalism. Syria, Jordan, Lebanon and Iraq were also imbued with the rising spirit of nationalism and saw Nasser as a rival, not a fraternal comrade-in-arms.

       Michael Doran’s Ike’s Gamble: America’s Rise to Dominance in the Middle East provides background and context for the United States’ decision not to support Britain, France and Israel during the 1956 Suez crisis. As his title suggests, Doran places America’s President, war hero and father figure Dwight D. Eisenhower, known affectionately as Ike, at the center of the complicated Middle East web (although Nasser probably merited a place in Doran’s title: “Ike’s Gamble on Nasser” would have better captured the spirit of the narrative). Behind the perpetual smile, Eisenhower was a cold-blooded realist who was “unshakably convinced” (p.214) that the best way to advance American interests in the Middle East and hold Soviet ambitions in check was for the United States to play the role of an “honest broker” in the region, sympathetic to the region’s nationalist aspirations and not too closely aligned with its traditional allies Britain and France, or with the young state of Israel.

       But Doran, a senior fellow at the Hudson Institute and former high level official at the National Security Council and Department of Defense in the administration of George W. Bush, goes on to argue that Eisenhower’s vision of the honest broker – and his “bet” on Nasser – were undermined by the United States’ failure to recognize the “deepest drivers of the Arab and Muslim states, namely their rivalries with each other for power and authority” (p.105). Less than two years after taking Nasser’s side in the 1956 Suez Crisis, Eisenhower seemed to reverse himself.  By mid-1958, Doran reveals, Eisenhower had come to regret his bet on Nasser and his refusal to back Britain, France and Israel during the crisis. Eisenhower kept this view largely to himself, however, distorting the historical picture of his Middle East policies.

        Although Doran considers Eisenhower “one of the most sophisticated and experienced practitioners of international politics ever to reside in the White House,” the story of his relationship with Nasser is at bottom a lesson in the “dangers of calibrating the distinction between ally and enemy incorrectly” (p.13).  Or, as he puts it elsewhere, Eisenhower’s “bet” on Nasser’s regime is a “tale of Frankenstein’s monster, with the United States as the mad scientist and the new regime as his uncontrollable creation” (p.10).

* * *

      The “honest broker” approach to the Middle East dominated the Eisenhower administration from its earliest days in 1953. Eisenhower, his Secretary of State John Foster Dulles, and most of their key advisors shared a common picture of the volatile region. Trying to wind down a war in Korea they had inherited from the Truman Administration, they considered the Middle East the next and most critical region of confrontation in the global Cold War between the Soviet Union and the United States.  As they saw it, in the Middle East the United States found itself caught between Arabs and other “indigenous” nationalities on one side, and the British, French, and Israelis on the other. “Each side had hold of one arm of the United States, which they were pulling like a tug rope. The picture was so obvious to almost everyone in the Eisenhower administration that it was understood as an objective description of reality” (p.44). It is impossible, Doran writes, to exaggerate the “impact that the image of America as an honest broker had on Eisenhower’s thought . . . The notion that the top priority of the United States was to co-opt Arab nationalists by helping them extract concessions – within limits – from Britain and Israel was not open to debate. It was a view that shaped all other policy proposals” (p.10).

         Alongside Ike’s “bet” on Nasser, the book’s second major theme is the deterioration of the famous “special relationship” between Britain and the United States during Eisenhower’s first term, due in large measure to differences over Egypt, the Suez Canal, and Nasser (and, to quibble further with the book’s title, “Britain’s Fall from Power in the Middle East” in my view would have captured the spirit of the narrative better than “America’s Rise to Dominance in the Middle East”).  The Eisenhower administration viewed Britain’s once mighty empire as a relic of the past, out of place in the post World War II order. It viewed Britain’s leader, Prime Minister Winston Churchill, in much the same way. Eisenhower entered his presidency convinced that it was time for Churchill, then approaching age 80, to exit the world stage and for Britain to relinquish control of its remaining colonial possessions – in Egypt, its military base and sizeable military presence along the Suez Canal.

      Anthony Eden replaced Churchill as prime minister in 1955.  A leading anti-appeasement ally of Churchill in the 1930s, Eden by the 1950s shared Eisenhower’s view that Churchill had become a “wondrous relic” who was “stubbornly clinging to outmoded ideas” (p.20) about Britain’s empire and its place in the world.  Yet although interested in aligning Britain’s policies with the realities of the post-World War II era, Eden led the British assault on Suez in 1956.  With “his career destroyed” (p.202), Eden was forced to resign early in 1957.

       If the United States today also has a “special relationship” with Israel, that relationship had yet to emerge during the first Eisenhower term.  Israel’s circumstances were of course entirely different from those of Britain and France: it was a young country surrounded by Arabic-speaking states implacably hostile to its very existence. President Truman had formally recognized Israel less than a decade earlier, in 1948.  But substantial segments of America’s foreign policy establishment in the 1950s continued to believe that such recognition had been in error. Not least among them was John Foster Dulles, Eisenhower’s Secretary of State.  There seemed to be more than a whiff of anti-Semitism in Dulles’ antagonism toward Israel.

        Describing Israel as the “darling of Jewry throughout the world” (p.98), Dulles decried the “potency of international Jewry” (p.98) and warned that the United States should not be seen as a “backer of expansionist Zionism” (p.77).  For the first two years of the Eisenhower administration, Dulles followed a policy designed to “’deflate the Jews’ . . . by refusing to sell arms to Israel, rebuffing Israeli requests for security guarantees, and diminishing the level of financial assistance to the Jewish state” (p.99).   Dulles’ views were far from idiosyncratic. Israel “stirred up deep hostility among the Arabs” and many of America’s foreign policy elites in the 1950s ”saw Israel as a liability” (p.9). Without success, the United States sought Nasser’s agreement to an Arab-Israeli accord which would have required limited territorial concessions from Israel.

       Behind the scenes, however, the United States brokered a 1954 Anglo-Egyptian agreement, by which Britain would withdraw from its military base in the Canal Zone over an 18-month period, with Egypt agreeing that Britain could return to its base in the event of a major war. Doran terms this Eisenhower’s “first bet” on Nasser. Ike “wagered that the evacuation of the British from Egypt would sate Nasser’s nationalist appetite. The Egyptian leader, having learned that the United States was willing and able to act as a strategic partner, would now keep Egypt solidly within the Western security system. It would not take long before Eisenhower would come to realize that Nasser’s appetite only increased with eating” (p.67-68).

        As the United States courted Nasser as a voice of Arab nationalism and a bulwark against Soviet expansion into the region, it also encouraged other Arab voices. In what the United States imprecisely termed the “Northern Tier,” it supported security pacts between Turkey and Iraq and made overtures to Egypt’s neighbors Syria and Jordan. Nasser adamantly opposed these measures, considering them a means of constraining his own regional aspirations and preserving Western influence through the back door.  The “fatal intellectual flaw” of the United States’ honest broker strategy, Doran argues, was that it “imagined the Arabs and Muslims as a unified bloc. It paid no attention whatsoever to all of the bitter rivalries in the Middle East that had no connection to the British and Israeli millstones. Consequently, Nasser’s disputes with his rivals simply did not register in Washington as factors of strategic significance” (p.78).

           In September 1955, Nasser shocked the United States by concluding an agreement to buy arms from the Soviet Union, through Czechoslovakia, one of several indications that he was at best playing the West against the Soviet Union, at worst tilting toward the Soviet side.  Another came in May 1956, when Egypt formally recognized Communist China. In July 1956, partially in reaction to Nasser’s pro-Soviet dalliances, Dulles informed the Egyptian leader that the United States was pulling out of a project to provide funding for a dam across the Nile River at Aswan, Nasser’s “flagship development project . . . [which was] expected to bring under cultivation hundreds of thousands of acres of arid land and to generate millions of watts of electricity” (p.167).

         Days later, Nasser countered by announcing the nationalization of the Suez Canal, predicting that the tolls collected from ships passing through the canal would pay for the dam’s construction within five years. Doran characterizes Nasser’s decision to nationalize the canal as the “single greatest move of his career.” It is impossible to exaggerate, he contends, the “power of the emotions that the canal takeover stirred in ordinary Egyptians. If Europeans claimed that the company was a private concern, Egyptians saw it as an instrument of imperial exploitation – ‘a state within a state’. . . [that was] plundering a national asset for the benefit of France and Britain” (p.171).

            France, otherwise largely missing in Doran’s detailed account, concocted the scheme that led to the October 1956 crisis.  Concerned that Nasser was providing arms to anti-French rebels in Algeria, France proposed to Israel what Doran terms a “stranger than fiction” (p.189) plot by which the Israelis would invade Egypt. Then, in order to protect shipping through the canal, France and Britain would:

issue an ultimatum demanding that the belligerents withdraw to a position of ten miles on either side of the canal, or face severe consequences. The Israelis, by prior arrangement, would comply. Nasser, however, would inevitably reject the ultimatum, because it would leave Israeli forces inside Egypt while simultaneously compelling Egyptian forces to withdraw from their own sovereign territory. An Anglo-French force would then intervene to punish Egypt for noncompliance. It would take over the canal and, in the process, topple Nasser (p.189).

The crisis unfolded more or less according to this script when Israeli brigades invaded Egypt on October 29th and Britain and France launched their joint invasion on November 5th. Nasser sank ships in the canal and blocked oil tankers headed through the canal to Europe.

         Convinced that acquiescence in the invasion would drive the entire Arab world to the Soviet side in the global Cold War, the United States issued measured warnings to Britain and France to give up their campaign and withdraw from Egyptian soil. If Nasser was by then a disappointment to the United States, Doran writes, the “smart money was still on an alliance with moderate nationalism, not with dying empires” (p.178). But when Eden telephoned the White House on November 7, 1956, largely to protest the United States’ refusal to sell oil to Britain, Ike went further. In that phone call, Eisenhower as honest broker “decided that Nasser must win the war, and that he must be seen to win” (p.249).  Eisenhower’s hardening toward his traditional allies a week into the crisis, Doran contends, constituted his “most fateful decision of the Suez Crisis: to stand against the British, French, and Israelis in [a] manner that was relentless, ruthless, and uncompromising . . . [Eisenhower] demanded, with single-minded purpose, the total and unconditional British, French, and Israeli evacuation from Egypt. These steps, not the original decision to oppose the war, were the key factors that gave Nasser the triumph of his life” (p.248-49).

        When the financial markets caught wind of the blocked oil supplies, the value of the British pound plummeted and a run on sterling reserves ensued. “With his currency in free fall, Eden became ever more vulnerable to pressure from Eisenhower. Stabilizing the markets required the cooperation of the United States, which the Americans refused to give until the British accepted a complete, immediate, and unconditional withdrawal from Egypt” (p.196). At almost the same time, Soviet tanks poured into Budapest to suppress a burgeoning Hungarian pro-democracy movement. The crisis in Eastern Europe had the effect of “intensifying Eisenhower’s and Dulles’s frustration with the British and the French. As they saw it, Soviet repression in Hungary offered the West a prime opportunity to capture the moral high ground in international politics – an opportunity that the gunboat diplomacy in Egypt was destroying” (p.197). The United States supported a United Nations General Assembly resolution calling for an immediate ceasefire and withdrawal of invading troops. Britain, France and Israel had little choice but to accept these terms in December 1956.

       In the aftermath of the Suez Crisis, the emboldened Nasser continued his quest to become the region’s dominant leader. In February 1958, he engineered the formation of the United Arab Republic, a political union between Egypt and Syria that he envisioned as the first step toward a broader pan-Arab state (in fact, the union lasted only until 1961). He orchestrated a coup in Iraq in July 1958. Later that month, Eisenhower sent American troops into Lebanon to avert an Egyptian-led uprising against the pro-western government of Christian president Camille Chamoun. Sometime in the period between the Suez Crisis of 1956 and the intervention in Lebanon in 1958, Doran argues, Eisenhower withdrew his bet on Nasser, coming to the view that his support of Egypt during the 1956 Suez crisis had been a mistake.

        The Eisenhower of 1958 “consistently and clearly argued against embracing Nasser” (p.231).  He now viewed Nasser as a hardline opponent of any reconciliation between Arabs and Israel, squarely in the Soviet camp. Eisenhower, a “true realist with no ideological ax to grind,” came to recognize that his Suez policy of “sidelining the Israelis and the Europeans simply did not produce the promised results. The policy was . . . a blunder” (p.255).   Unfortunately, Doran argues, Eisenhower kept his views to himself until well into the 1960s and few historians picked up on his change of mind. This allowed those who sought to distance United States policy from Israel to cite Eisenhower’s stance in the 1956 Suez Crisis, without taking account of Eisenhower’s later reconsideration of that stance.

* * *

      Doran relies upon an extensive mining of diplomatic archival sources, especially those of the United States and Great Britain, to piece together this intricate depiction of the Eisenhower-Nasser relationship and the 1956 Suez Crisis. These sources allow Doran to emphasize the interactions of the key actors in the Middle East throughout the 1950s, including personal animosities and rivalries, and intra-governmental turf wars.  He writes in a straightforward, unembellished style. Helpful subheadings within each chapter make his detailed and sometimes dense narrative easier to follow. His work will appeal to anyone who has worked in an embassy overseas, to Middle East and foreign policy wonks, and to general readers with an interest in the 1950s.

Thomas H. Peebles

Saint Augustin-de-Desmaures

Québec, Canada

June 19, 2017

Portrait of a President Living on Borrowed Time

Joseph Lelyveld, His Final Battle:

The Last Months of Franklin Roosevelt 

            During the last year and a half of his life, from mid-October 1943 to his death in Warm Springs, Georgia on April 12, 1945, Franklin D. Roosevelt’s presidential plate was full, even overflowing. He was grappling with winning history’s most devastating war and structuring a lasting peace for the post-war global order, all the while tending to multiple domestic political demands. But Roosevelt spent much of this time out of public view in semi-convalescence, often in locations outside Washington, with limited contact with the outside world. Those who did meet the president noticed a striking weight loss and described him with words like “listless,” “weary,” and “easily distracted.” We now know that Roosevelt had life-threatening high blood pressure, termed malignant hypertension, making him susceptible to a stroke or coronary attack at any moment. Roosevelt’s declining health was carefully shielded from the public and only rarely discussed directly, even within his inner circle. At the time, probably not more than a handful of doctors were aware of the full gravity of Roosevelt’s physical condition, and it is an open question whether Roosevelt himself was aware.

In His Final Battle: The Last Months of Franklin Roosevelt, Joseph Lelyveld, former executive editor of the New York Times, seeks to shed light upon, if not answer, this open question. Lelyveld suggests that the president likely was more aware than he let on of the implications of his declining physical condition. In a resourceful portrait of America’s longest serving president during his final year and a half, Lelyveld considers Roosevelt’s political activities against the backdrop of his health. The story is bookended by Roosevelt’s meetings to negotiate the post-war order with fellow wartime leaders Winston Churchill and Joseph Stalin, in Teheran in December 1943 and at Yalta in the Crimea in February 1945. Between the two meetings came Roosevelt’s 1944 decision to run for an unprecedented fourth term, a decision he reached just weeks prior to the Democratic National Convention that summer, and the ensuing campaign.

Lelyveld’s portrait of a president living on borrowed time emerges from an excruciatingly thin written record of Roosevelt’s medical condition. Roosevelt’s medical file disappeared without explanation from a safe at Bethesda Naval Hospital shortly after his death.   Unable to consider Roosevelt’s actual medical records, Lelyveld draws clues concerning his physical condition from the diary of Margaret “Daisy” Suckley, discovered after Suckley’s death in 1991 at age 100, and made public in 1995. The slim written record on Roosevelt’s medical condition limits Lelyveld’s ability to tease out conclusions on the extent to which that condition may have undermined his job performance in his final months.

* * *

            Daisy Suckley, a distant cousin of Roosevelt, was a constant presence in the president’s life in his final years and a keen observer of his physical condition. During Roosevelt’s last months, the “worshipful” (p.3) and “singularly undemanding” Suckley had become what Lelyveld terms the “Boswell of [Roosevelt’s] rambling ruminations,” secretly recording in an “uncritical, disjointed way the hopes and daydreams” that occupied the frequently inscrutable president (p.75). By 1944, Lelyveld notes, there was “scarcely a page in Daisy’s diary without some allusion to how the president looks or feels” (p.77).   Lelyveld relies heavily upon the Suckley diary out of necessity, given the disappearance of Roosevelt’s actual medical records after his death.

Lelyveld attributes the disappearance to Admiral Ross McIntire, an ear-nose-and-throat specialist who served both as Roosevelt’s personal physician and Surgeon General of the Navy. In the latter capacity, McIntire oversaw a wartime staff of 175,000 doctors, nurses and orderlies at 330 hospitals and medical stations around the world. Earlier in his career, Roosevelt’s press secretary had upbraided McIntire for allowing the president to be photographed in his wheelchair. From that point forward, McIntire understood that a major component of his job was to conceal Roosevelt’s physical infirmities and protect and promote a vigorously healthy public image of the president. The “resolutely upbeat” (p.212) McIntire, a master of “soothing, well-practiced bromides” (p.226), thus assumes a role in Lelyveld’s account which seems as much “spin doctor” as actual doctor. His most frequent message for the public was that the president was in “robust health” (p.22), in the process of “getting over” a wide range of lesser ailments such as a heavy cold, flu, or bronchitis.

A key turning point in Lelyveld’s story occurred in mid-March 1944, 13 months prior to Roosevelt’s death, when the president’s daughter Anna Roosevelt Boettiger confronted McIntire and demanded to know more about what was wrong with her father. McIntire doled out his “standard bromides, but this time they didn’t go down” (p.23). Anna later said that she “didn’t think McIntire was an internist who really knew what he was talking about” (p.93). In response, however, McIntire brought in Dr. Howard Bruenn, the Navy’s top cardiologist. Evidently, Lelyveld writes, McIntire had “known all along where the problem was to be found” (p.23). Bruenn was apparently the first cardiologist to have examined Roosevelt.

McIntire promised to have Roosevelt’s medical records delivered to Bruenn prior to his initial examination of the president, but failed to do so, an “extraordinary lapse” (p.98) which Lelyveld regards as additional evidence that McIntire was responsible for the disappearance of those records after Roosevelt’s death the following year. Bruenn found that Roosevelt was suffering from “acute congestive heart failure” (p.98). He recommended that the wartime president avoid “irritation,” severely cut back his work hours, rest more, and reduce his smoking habit, then a daily pack and a half of Camel cigarettes. In the midst of the country’s struggle to defeat Nazi Germany and imperial Japan, its leader was told that he “needed to sleep half his time and reduce his workload to that of a bank teller” (p.99), Lelyveld wryly notes.  Dr. Bruenn saw the president regularly from that point onward, traveling with him to Yalta in February 1945 and to Warm Springs in April of that year.

Ten days after Dr. Bruenn’s diagnosis, Roosevelt told a newspaper columnist, “I don’t work so hard any more. I’ve got this thing simplified . . . I imagine I don’t work as many hours a week as you do” (p.103). The president, Lelyveld concludes, “seems to have processed the admonition of the physicians – however it was delivered, bluntly or softly – and to be well on the way to convincing himself that if he could survive in his office by limiting his daily expenditure of energy, it was his duty to do so” (p.103).

At that time, Roosevelt had not indicated publicly whether he wished to seek a fourth presidential term and had not discussed this question with any of his advisors. Moreover, with the “most destructive military struggle in history approaching its climax, there was no one in the White House, or his party, or the whole of political Washington, who dared stand before him in the early months of 1944 and ask face-to-face for a clear answer to the question of whether he could contemplate stepping down” (p.3). The hard if unspoken political truth was that Roosevelt was the Democratic party’s only hope to retain the White House. There was no viable successor in the party’s ranks. But his re-election was far from assured, and public airing of concerns about his health would be unhelpful to say the least in his re-election bid. Roosevelt did not make his actual decision to run until just weeks before the 1944 Democratic National Convention in Chicago.

At the convention, Roosevelt’s then vice-president, Henry Wallace, and his counselors Harry Hopkins and Jimmy Byrnes jockeyed for the vice-presidential nomination, along with William Douglas, already a Supreme Court justice at age 45. There’s no indication that Senator Harry S. Truman actively sought to be Roosevelt’s running mate. Lelyveld writes that it is tribute to FDR’s “wiliness” that the notion has persisted over the years that he was “only fleetingly engaged in the selection” of his 1944 vice-president and that he was “simply oblivious when it came to the larger question of succession” (p.172). To the contrary, although he may not have used the word “succession” in connection with his vice-presidential choice, Roosevelt “cared enough about qualifications for the presidency to eliminate Wallace as a possibility and keep Byrnes’s hopes alive to the last moment, when, for the sake of party unity, he returned to Harry Truman as the safe choice” (p.172-73).

Having settled upon Truman as his running mate, Roosevelt indicated that he did not want to campaign as usual because the war was too important. But campaign he did, and Lelyveld shows how hard he campaigned – and how hard it was for him given his deteriorating health, which aggravated his mobility problems. The outcome was in doubt up until Election Day, but Roosevelt was resoundingly reelected to a fourth presidential term. The president could then turn his full attention to the war effort, focusing both upon how the war would be won and how the peace would be structured. Roosevelt’s foremost priority was structuring the peace; the details on winning the war were largely left to his staff and to the military commanders in the field.

Roosevelt badly wanted to avoid the mistakes that Woodrow Wilson had made after World War I. He was putting together the pieces of an organization already referred to as the United Nations and fervently sought the participation and support of his war ally, the Soviet Union. He also wanted Soviet support for the war against Japan in the Pacific after the Nazi surrender, and for an independent and democratic Poland. In pursuit of these objectives, Roosevelt agreed to travel over 10,000 arduous miles to Yalta, to meet in February 1945 with Stalin and Churchill.

In Roosevelt’s mind, Stalin was by then the key both to victory on the battlefield and to a lasting peace afterwards — and he was, in Roosevelt’s phrase, “get-at-able” (p.28) with the right doses of the legendary Roosevelt charm.   Roosevelt had begun his serious courtship of the Soviet leader at their first meeting in Teheran in December 1943.  His fixation on Stalin, “crossing over now and then into realms of fantasy” (p.28), continued at Yalta. Lelyveld’s treatment of Roosevelt at Yalta covers similar ground to that in Michael Dobbs’ Six Months That Shook the World, reviewed here in April 2015. In Lelyveld’s account, as in that of Dobbs, a mentally and physically exhausted Roosevelt at Yalta ignored the briefing books his staff prepared for him and relied instead upon improvisation and his political instincts, fully confident that he could win over Stalin by force of personality.

According to cardiologist Bruenn’s memoir, published a quarter of a century later, early in the conference Roosevelt showed worrying signs of oxygen deficiency in his blood. His habitually high blood pressure readings revealed a dangerous condition, pulsus alternans, in which every second heartbeat was weaker than the preceding one, a “warning signal from an overworked heart” (p.270).   Dr. Bruenn ordered Roosevelt to curtail his activities in the midst of the conference. Churchill’s physician, Lord Moran, wrote that Roosevelt had “all the symptoms of hardening of arteries in the brain” during the conference and gave the president “only a few months to live” (p.270-71). Churchill himself commented that his wartime ally “really was a pale reflection almost throughout” (p.270) the Yalta conference.

Yet, Roosevelt recovered sufficiently to return home from the conference and address Congress and the public on its results, plausibly claiming victory. The Soviet Union had agreed to participate in the United Nations and in the war in Asia, and to hold what could be construed as free elections in Poland. Had he lived longer, Roosevelt would have seen that Stalin delivered as promised on the Asian war. The Soviet Union also became a member of the United Nations and maintained its membership in the organization until its dissolution in 1991, but was rarely if ever the partner Roosevelt envisioned in keeping world peace. The possibility of a democratic Poland, “by far the knottiest and most time-consuming issue Roosevelt confronted at Yalta” (p.285), was by contrast slipping away even before Roosevelt’s death.

At one point in his remaining weeks, Roosevelt exclaimed, “We can’t do business with Stalin. He has broken every one of the promises he made at Yalta” on Poland (p.304; Dobbs includes the same quotation, adding that Roosevelt thumped on his wheelchair at the time of this outburst). But, like Dobbs, Lelyveld argues that even a more physically fit, fully focused and coldly realistic Roosevelt would likely have been unable to save Poland from Soviet clutches. When the allies met at Yalta, Stalin’s Red Army was in the process of consolidating military control over almost all of Polish territory.  If Roosevelt had been at the peak of vigor, Lelyveld concludes, the results on Poland “would have been much the same” (p.287).

Roosevelt was still trying to mend fences with Stalin on April 11, 1945, the day before his death in Warm Springs. Throughout the following morning, Roosevelt worked on matters of state: he received an update on the US military advances within Germany and even signed a bill, sustaining the Commodity Credit Corporation. Then, just before lunch Roosevelt collapsed. Dr. Bruenn arrived about 15 minutes later and diagnosed a hemorrhage in the brain, a stroke likely caused by the bursting of a blood vessel in the brain or the rupture of an aneurysm. “Roosevelt was doomed from the instant he was stricken” (p.323).  Around midnight, Daisy Suckley recorded in her diary that the president had died at 3:35 pm that afternoon. “Franklin D. Roosevelt, the hope of the world, is dead,” (p.324), she wrote.

Daisy was one of several women present at Warm Springs to provide company to the president during his final visit. Another was Eleanor Roosevelt’s former Secretary, Lucy Mercer Rutherford, by this time the primary Other Woman in the president’s life. Rutherford had driven down from South Carolina to be with the president, part of a recurring pattern in which Rutherford appeared in instances when wife Eleanor was absent, as if coordinated by a social secretary with the knowing consent of all concerned. But this orchestration broke down in Warm Springs in April 1945. After the president died, Rutherford had to flee in haste to make room for Eleanor. Still another woman in the president’s entourage, loquacious cousin Laura Delano, compounded Eleanor’s grief by letting her know that Rutherford had been in Warm Springs for the previous three days, adding gratuitously that Rutherford had also served as hostess at occasions at the White House when Eleanor was away. “Grief and bitter fury were folded tightly in a large knot” (p.325) for the former First Lady at Warm Springs.

Subsequently, Admiral McIntire asserted that Roosevelt had a “stout heart” and that his blood pressure was “not alarming at any time” (p.324-25), implying that the president’s death from a stroke had proven that McIntire had “always been right to downplay any suggestion that the president might have heart disease.” If not a flat-out falsehood, Lelyveld argues, McIntire’s assertion “at least raises the question of what it would have taken to alarm him” (p.325). Roosevelt’s medical file by this time had gone missing from the safe at Bethesda Naval Hospital, most likely removed by the Admiral because it would have revealed the “emptiness of the reassurances he’d fed the press and the public over the years, whenever questions arose about the president’s health” (p.325).

* * *

           Lelyveld declines to engage in what he terms an “argument without end” (p.92) on the degree to which Roosevelt’s deteriorating health impaired his job performance during his last months and final days. Rather, he skillfully pieces together the limited historical record of Roosevelt’s medical condition to add new insights into the ailing but ever enigmatic president as he led his country nearly to the end of history’s most devastating war.

 

Thomas H. Peebles

La Châtaigneraie, France

March 28, 2017

Do Something


Zachary Kaufman, United States Law and Policy on Transitional Justice:

Principles, Politics, and Pragmatics 

             The term “transitional justice” is applied most frequently to “post-conflict” situations, where a nation state or region is emerging from some type of war or violent conflict that has given rise to genocide, war crimes, or crimes against humanity — each now a recognized concept under international law, with “mass atrocities” being a common shorthand used to embrace these and related concepts. In United States Law and Policy on Transitional Justice: Principles, Politics, and Pragmatics, Zachary Kaufman, a Senior Fellow and expert on human rights at Harvard University’s Kennedy School of Government, explores the circumstances which have led the United States to support that portion of the transitional justice process that determines how to deal with suspected perpetrators of mass atrocities, and why it chooses a particular means of support (disclosure: Kaufman and I worked together in the US Department of Justice’s overseas assistance unit between 2000 and 2002, although we had different portfolios: Kaufman’s involved Africa and the Middle East, while I handled Central and Eastern Europe).

          Kaufman’s book, adapted from his Oxford University PhD dissertation, centers around case studies of the United States’ role in four major transitional justice situations: Germany and Japan after World War II, and ex-Yugoslavia and Rwanda in the 1990s, after the end of the Cold War. It also looks more briefly at two secondary cases, the 1988 bombing of Pan American flight 103, attributed to Libyan nationals, and atrocities committed during Iraq’s 1990-91 occupation of Kuwait. Making extensive use of internal US government documents, many of which have been declassified, Kaufman digs deeply into the thought processes that informed the United States’ decisions on transnational justice in these six post-conflict situations. Kaufman brings a social science perspective to his work, attempting to tease out of the case studies general rules about how the United States might act in future transitional justice situations.

          The term “transitional justice” implicitly affirms that a permanent and independent national justice system can and should be created or restored in the post-conflict state.  Kaufman notes at one point that dealing with suspected perpetrators of mass atrocities is just one of several critical tasks involved in creating or restoring a permanent national justice system in a post-conflict state.  Others can include: building or rebuilding sustainable judicial institutions, strengthening the post-conflict state’s legislation, improving capacity of its justice-sector personnel, and creating or upgrading the physical infrastructure needed for a functioning justice system. These latter tasks are not the focus of Kaufman’s work. Moreover, in determining how to deal with alleged perpetrators of mass atrocities, Kaufman’s focus is on the front end of the process: how and why the United States determined to support this portion of the process generally and why it chose particular mechanisms rather than others.   The outcomes that the mechanisms produce, although mentioned briefly, are not his focus either.

          In each of the four primary cases, the United States joined other nations to prosecute those accused or suspected of involvement in mass atrocities before an international criminal tribunal, which Kaufman characterizes as the “most significant type of transitional justice institution” (p.12). Prosecution before an international tribunal, he notes, can promote stability, the rule of law and accountability, and can serve as a deterrent to future atrocities. But the process can be both slow and expensive, with significant political and legal risks. Kaufman’s work provides a useful reminder that prosecution by an international tribunal is far from the only option available to deal with alleged perpetrators of mass atrocities. Others include trials in other jurisdictions, including those of the post-conflict state, and several non-judicial alternatives: amnesty for those suspected of committing mass atrocities, with or without conditions; “lustration,” where suspected persons are disenfranchised from specific aspects of civic life (e.g., declared ineligible for the civil service or the military); and “doing nothing,” which Kaufman considers tantamount to unconditional amnesty. Finally, there is the option of summary execution or other punishment, without benefit of trial. These options can be applied in combination, e.g., amnesty for some, trial for others.

         Kaufman weighs two models, “legalism” and “prudentialism,” as potential explanations for why and how the United States acted in the cases under study and is likely to act in the future. Legalism contends that prosecution before an international tribunal of individuals suspected or accused of mass atrocities is the only option a liberal democratic state may elect, consistent with its adherence to the rule of law. In limited cases, amnesty or lustration may be justified as a supplement to initiating cases before a tribunal. Summary execution may never be justified. Prudentialism is more ad hoc and flexible, with the question whether to establish or invoke an international criminal tribunal or pursue other options determined by any number of different political, pragmatic and normative considerations, including such geo-political factors as promotion of stability in the post-conflict state and region, the determining state or states’ own national security interests, and the relationships between determining states. Almost by definition, legalism precludes consideration of these factors.

          Kaufman presents his cases in a highly systematic manner, with tight overall organization. An introduction and three initial chapters set forth the conceptual framework for the subsequent case studies, addressing matters like methodology and definitional parameters. The four major cases are then treated in four separate chapters, each with its own introduction and conclusion, followed by an overall conclusion, also with its own introduction and conclusion (the two secondary cases, Libya and Iraq, are treated within the chapter on ex-Yugoslavia). Substantive headings throughout each chapter make his arguments easy to follow. General readers may find jarring his extensive use of acronyms throughout the text, drawn from a three-page list contained at the outset. But amidst Kaufman’s deeply analytical exploration of the thinking that lay behind the United States’ actions, readers will appreciate his decidedly non-sociological hypothesis as to why the United States elects to engage in the transitional justice process: a deeply felt American need in the wake of mass atrocities to “do something” (always in quotation marks).

* * *

          Kaufman begins his case studies with the best-known example of transitional justice, Nazi Germany after World War II. The United States supported creation of what has come to be known as the Nuremberg War Crimes tribunal, a military court administered by the four victorious allies, the United States, Soviet Union, Great Britain and France. The Nuremberg story is so well known, thanks in part to “Judgment at Nuremberg,” the best-selling book and popular film, that most readers will assume that the multi-lateral Nuremberg trials were the only option seriously under consideration at the time. To the contrary, Kaufman demonstrates that such trials were far from the only option on the table.

        For a while the United States seriously considered summary executions of accused Nazi leaders. British Prime Minister Winston Churchill pushed this option during wartime deliberations and, Kaufman indicates, President Roosevelt seemed at times on the cusp of agreeing to it. Equally surprisingly, Soviet Union leader Joseph Stalin lobbied early and hard for a trial process rather than summary executions. The Nuremberg Tribunal “might not have been created without Stalin’s early, constant, and forceful lobbying” (p.89), Kaufman contends.  Roosevelt abandoned his preference for summary executions after economic aspects of the Morgenthau Plan, which involved the “pastoralization” of Germany, were leaked to the press. When the American public “expressed its outrage at treating Germany so harshly through a form of economic sanctions,” Roosevelt concluded that Americans would be “unsupportive of severe treatment for the Germans through summary execution” (p.85).

          But the United States’ support for war crimes trials became unwavering only after Roosevelt died in April 1945 and Harry S. Truman assumed the presidency. The details and mechanics of a multi-lateral trial process were not worked out until early August 1945 in the “London Agreement,” after Churchill had been voted out of office and Labour Prime Minister Clement Attlee represented Britain. Trials against 22 high-level Nazi officials began in November 1945, with verdicts rendered in October 1946: twelve defendants were sentenced to death, seven drew prison sentences, and three were acquitted.

       Many lower-level Nazi officials were tried in unilateral prosecutions by one of the allied powers. Lustration, barring active Nazi party members from major public and private positions, was applied in the US, British, and Soviet sectors. Numerous high-level Nazi officials were allowed to emigrate to the United States to assist in Cold War endeavors, which Kaufman characterizes as a “conditional amnesty” (the Nazi war criminals who emigrated to the United States are the subject of Eric Lichtblau’s The Nazis Next Door: How America Became a Safe Haven for Hitler’s Men, reviewed here in October 2015; Frederick Taylor’s Exorcising Hitler: The Occupation and Denazification of Germany, reviewed here in December 2012, addresses more generally the manner in which the Allies dealt with lower-level Nazi officials). By 1949, the Cold War between the Soviet Union and the West undermined the allies’ appetite for prosecution, with the Korean War completing the process of diverting the world’s attention away from Nazi war criminals.

          The story behind creation of the International Military Tribunal for the Far East, designed to hold accountable accused Japanese perpetrators of mass atrocities, is far less known than that of Nuremberg, Kaufman observes. What has come to be known as the “Tokyo Tribunal” largely followed the Nuremberg model, with some modifications. Even though 11 allies were involved, the United States was closer to being the sole decision-maker on the options to pursue in Japan than it had been in Germany. As the lead occupier of post-war Japan, the United States had “no choice but to ‘do something’” (p.119). Only the United States had both the means and will to oversee the post-conflict occupation and administration of Japan. That oversight authority was vested largely in a single individual, General Douglas MacArthur, Supreme Commander of the Allied forces, whose extraordinarily broad – nearly dictatorial – authority in post-World War II Japan extended to the transitional justice process. MacArthur approved appointments to the tribunal, signed off on its indictments, and exercised review authority over its decisions.

            In the interest of securing the stability of post-war Japan, the United States accorded unconditional amnesty to Japan’s Emperor Hirohito. The Tokyo Tribunal indicted twenty-eight high-level Japanese officials, but more than fifty were not indicted, and thus also benefited from an unconditional amnesty. This included many suspected of “direct involvement in some of the most horrific crimes of WWII” (p.108), several of whom eventually returned to Japanese politics. Through lustration, more than 200,000 Japanese were removed or barred from public office, either permanently or temporarily.  As in Germany, by the late 1940s the emerging Cold War with the Soviet Union had chilled the United States’ enthusiasm for prosecuting Japanese suspected of war crimes.

           The next major United States engagements in transitional justice arose in the 1990s, after the Soviet Union had itself collapsed and the Cold War was over: the former Yugoslavia broke apart in a spasm of ethnic violence, and massive ethnic-based genocide erupted in Rwanda in 1994. In both instances, heavy United States involvement in the post-conflict process was attributed in part to a sense of remorse over its lack of involvement in the conflicts themselves and its failure to halt the ethnic violence, resulting in a need to “do something.” Rwanda marks the only instance among the four primary cases where mass atrocities arose out of an internal conflict.

       The ethnic conflicts in Yugoslavia led to the creation of the International Criminal Tribunal for Yugoslavia (ICTY), based in The Hague and administered under the auspices of the United Nations Security Council. Kaufman provides much useful insight into the thinking behind the United States’ support for the creation of the court and the decision to base it in The Hague as an authorized Security Council institution. His documentation shows that United States officials consistently invoked the Nuremberg experience. The United States supported a multi-lateral tribunal through the Security Council because the council could “obligate all states to honor its mandates, which would be critical to the tribunal’s success” (p.157). The United States saw the ICTY as critical in laying a foundation for regional peace and facilitating reconciliation among competing factions. But it also supported the ICTY and took a lead role in its design to “prevent it from becoming a permanent [tribunal] with global reach” (p.158), which it deemed “potentially problematic” (p.157).

             The United States’ willingness to involve itself in the post-conflict transitional process in Rwanda, even more than in ex-Yugoslavia, may be attributed to its failure to intervene during the worst moments of the genocide itself. That the United States “did not send troops or other assistance to Rwanda perversely may have increased the likelihood of involvement in the immediate aftermath,” Kaufman writes. A “desire to compensate for its foreign policy failures in Rwanda, if not also feelings of guilt over not intervening, apparently motivated at least some [US] officials to support a transitional justice institution for Rwanda” (p.197).

        Once the Rwandan civil war subsided, there was a strong consensus within the international community that some kind of international tribunal was needed to impose accountability upon the most egregious génocidaires; that any such tribunal should operate under the auspices of the United Nations Security Council; that the tribunal should in some sense be modeled after the ICTY; and that the United States should take the lead in establishing the tribunal. The ICTY precedent prompted US officials to “consider carefully the consistency with which they applied transitional justice solutions in different regions; they wanted the international community to view [the US] as treating Africans similarly to Europeans” (p.182). According to these officials, after the precedent of proactive United States involvement in the “arguably less egregious Balkans crisis,” the United States would have found it “politically difficult to justify inaction in post-genocide Rwanda” (p.182).

           The United States favored a tribunal modeled after and structurally similar to the ICTY, which came to be known as the International Criminal Tribunal for Rwanda (ICTR). The ICTR was the first international court having competence to “prosecute and punish individuals for egregious crimes committed during an internal conflict” (p.174), a watershed development in international law and transitional justice. To deal with lower-level génocidaires, the Rwandan government and the international community later instituted additional prosecutorial measures, including prosecutions by Rwandan domestic courts and local domestic councils, termed gacaca.

          No international tribunals were created in the two secondary cases, Libya after the 1988 Pan Am flight 103 bombing, and the 1990-91 Iraqi invasion of Kuwait. At the time of the Pan Am bombing, several years prior to the September 11, 2001 attacks, United States officials considered terrorism a matter to be addressed “exclusively in domestic contexts” (p.156). In the case of the bombing of Pan Am 103, where Americans had been killed, competent courts were available in the United States and the United Kingdom. There were numerous documented cases of Iraqi atrocities against Kuwaiti civilians committed during Iraq’s 1990-91 invasion of Kuwait. But the 1991 Gulf War, while driving Iraq out of Kuwait, otherwise left Iraqi leader Saddam Hussein in power. The United States was therefore not in a position to impose accountability upon Iraqis for atrocities committed in Kuwait, as it had done after defeating Germany and Japan in World War II.

* * *

         In evaluating the prudentialism and legalism models as ways to explain the United States’ actions in the four primary cases, prudentialism emerges as the clear winner. Kaufman convincingly demonstrates that the United States in each case was open to multiple options and motivated by geo-political and other non-legal considerations. Indeed, it is difficult to imagine that the United States – or any other state for that matter – would ever, in advance, agree to disregard such considerations, as the legalism model seems to demand. After reflecting upon Kaufman’s analysis, I concluded that legalism might best be understood as more aspirational than empirical, a forward-looking, prescriptive model as to how the United States should act in future transitional justice situations, favored in particular by human rights organizations.

         But Kaufman also shows that the United States’ approach in each of the four cases was not entirely an ad hoc weighing of geo-political and related considerations.  Critical to his analysis are the threads which link the four cases, what he terms “path dependency,” whereby the Nuremberg trial process for Nazi war criminals served as a powerful influence upon the process set up for their Japanese counterparts; the combined Nuremberg-Tokyo experience weighed heavily in the creation of ICTY; and ICTY strongly influenced the structure and procedure of ICTR.   This cumulative experience constitutes another factor in explaining why the United States in the end opted for international criminal tribunals in each of the four cases.

         If a general rule can be extracted from Kaufman’s four primary cases, it might therefore be that an international criminal tribunal has evolved into the “default option” for the United States in transitional justice situations,  showing the strong pull of the only option which the legalism model considers consistent with the rule of law.  But these precedents may exert less hold on US policy makers going forward, as an incoming administration reconsiders the United States’ role in the 21st century global order. Or, to use Kaufman’s apt phrase, there may be less need felt for the United States to “do something” in the wake of future mass atrocities.

Thomas H. Peebles

Venice, Italy

February 10, 2017

 


Can’t Forget the Motor City


David Maraniss, Once In a Great City: A Detroit Story

     In 1960, Detroit was the automobile capital of the world, America’s undisputed center of manufacturing, and its fifth most populous city, with that year’s census tallying 1.67 million people. Fifty years later, the city had lost nearly a million people; its population had dropped to 677,000 and it ranked 21st in population among America’s cities in the 2010 census. Then, in 2013, the city reinforced its image as an urban basket case by ignominiously filing for bankruptcy. In Once In a Great City: A Detroit Story, David Maraniss, a native Detroiter of my generation and a highly skilled journalist whose previous works include books on Barack Obama, Bill Clinton and Vince Lombardi, focuses upon Detroit before its precipitous fall, an 18-month period from late 1962 to early 1964.   This was the city’s golden moment, Maraniss writes, when Detroit “seemed to be glowing with promise. . . a time of uncommon possibility and freedom when Detroit created wondrous and lasting things” (p.xii-xiii; in March 2012, I reviewed here two books on post World War II Detroit, under the title “Tales of Two Cities”).

       Detroit produced more cars in this 18-month period than Americans produced babies. Berry Gordy Jr.’s popular music empire, known officially and affectionately as “Motown,” was selling a new, upbeat pop music sound across the nation and around the world. Further, at a time when civil rights for African-Americans had become America’s most morally compelling issue, race relations in a city then about one-third black appeared to be as good as anywhere in the United States. With a slew of high-minded officials in the public and private sector dedicated to racial harmony and justice, Detroit sought to present itself as a model for the nation in securing opportunity for all its citizens.

     Maraniss begins his 18-month chronicle with dual events on the same day in November 1962: the burning of an iconic Detroit area memorial to the automobile industry, the Ford Rotunda, a “quintessentially American harmonic convergence of religiosity and consumerism” (p.1-2); and, later that afternoon, a police raid on the Gotham Hotel, once the “cultural and social epicenter of black Detroit” (p.10), but by then considered to be a den of illicit gambling controlled by organized crime groups.  He ends with President Lyndon Johnson’s landmark address in May 1964 on the campus of nearby University of Michigan in Ann Arbor, where Johnson outlined his grandiose vision of the Great Society.  Johnson chose Ann Arbor as the venue to deliver this address in large measure because of its proximity to Detroit. No place seemed “more important to his mission than Detroit,” Maraniss writes, a “great city that honored labor, built cars, made music, promoted civil rights, and helped lift working people into the middle class” (p.360).

     Maraniss’ chronicle unfolds between these bookend events, revolving around what had attracted President Johnson to the Detroit area in May 1964: building cars, making music, promoting civil rights, and lifting working people into the middle class. He skillfully weaves these strands into an affectionate, deeply researched yet easy-to-read portrait of Detroit during this 18-month golden period. But Maraniss does not ignore the fissures, visible to those perceptive enough to recognize them, which would lead to Detroit’s later unraveling. Detroit may have found the right formula for bringing a middle class life style to working class Americans, black and white alike. But already Detroit was losing population as its white working class was taking advantage of newfound prosperity to leave the city for nearby suburbs. Moreover, many in Detroit’s black community found the city to be anything but a model of racial harmony.

* * *

     An advertising executive described Detroit in 1963 as “intensely an automobile community – everybody lives, breathes, and sleeps automobiles. It’s like a feudal city” (p.111). Maraniss’ inside account of Detroit’s automobile industry focuses principally upon the remarkable relationship between Ford Motor Company’s chief executive, Henry Ford II (sometimes referred to as “HF2” or “the Deuce”), and the head of the United Auto Workers, Walter Reuther, during this 18-month golden age (Maraniss accords far less attention to the other two members of Detroit’s “Big Three,” General Motors and Chrysler, or to the upstart American Motors Corporation, whose chief executive, George Romney, was elected governor in November 1962 as a Republican). Ford and Reuther could not have been more different.

     Ford, from Detroit’s most famous industrial family, was a graduate of Hotchkiss School and Yale University who had been called home from military service during World War II to run the family business when his father Edsel Ford, then company president, died in 1943. Maraniss mischievously describes the Deuce as having a “touch of the peasant, with his manicured nails and beer gut and . . . frat-boy party demeanor” (p.28). Yet, Ford earnestly sought to modernize a company that he thought had grown too stodgy.  And, early in his tenure, he had famously said, “Labor unions are here to stay” (p.212).

      Reuther was a graduate of the “school of hard knocks,” the son of German immigrants whose father had worked in the West Virginia coalmines.   Reuther himself had worked his way up the automobile assembly line hierarchy to head its powerful union. George Romney once called Reuther the “most dangerous man in Detroit” (p.136). But Reuther prided himself on “pragmatic progressivism over purity, getting things done over making noise. . . [He was] not Marxist but Rooseveltian – in his case meaning as much Eleanor as Franklin” (p.136). Reuther believed that big government was necessary to solve big problems. During the Cold War, he won the support of Democratic presidents by “steering international trade unionists away from communism” (p.138).

     A quarter of a century after the infamous confrontation between Reuther and goons recruited by the Deuce’s grandfather Henry Ford to oppose unionization in the automobile industry — an altercation in which Reuther was seriously injured — the younger Ford’s partnership with Reuther blossomed. Rather than bitter and violent confrontation, the odd couple worked together to lift huge swaths of Detroit’s blue-collar auto workers into the middle class – arguably Detroit’s most significant contribution to American society in the second half of the 20th century. “When considering all that Detroit has meant to America,” Maraniss writes, “it can be said in a profound sense that Detroit gave blue-collar workers a way into the middle class . . . Henry Ford II and Walter Reuther, two giants of the mid-twentieth century, were essential to that result” (p.212).

      Reuther was aware that, despite higher wages and improved benefits, life on the assembly lines remained “tedious and soul sapping if not dehumanizing and dangerous” for autoworkers (p.215). He therefore consistently supported improving leisure time for workers outside the factory.  Music was one longstanding outlet for Detroiters, including its autoworkers. The city’s rich history of gospel, jazz and rhythm and blues musicians gave Detroit an “unmatched creative melody” (p.100), Maraniss observes.   By the early 1960s, Detroit’s musical tradition had become identified with the work of Motown founder, mastermind and chief executive, Berry Gordy Jr.

     Gordy was an ambitious man of “inimitable skills and imagination . . . in assessing talent and figuring out how to make it shine” (p.100).  Gordy aimed to market his Motown sound to white and black listeners alike, transcending the racial confines of the traditional rhythm and blues market. He set up what Maraniss terms a “musical assembly line” that “nurtured freedom through discipline” (p.195) for his many talented performers. The songs which Gordy wrote and championed captured the spirit of working class life: “clear story lines, basic and universal music for all people, focusing on love and heartbreak, work and play, joy and pain” (p.53).

      Gordy’s team included a mind-boggling array of established stars: Mary Wells, Marvin Gaye, Smokey Robinson and his Miracles, Martha Reeves and her Vandellas, Diana Ross and her Supremes, and the twelve-year-old prodigy, Little Stevie Wonder. Among Gordy’s rising future stars were the Temptations and the Four Tops. The Motown team was never more talented than in the summer of 1963, Maraniss contends. Ten Motown singles rose to Billboard’s Top 10 that year, and eight more to the Top 20. Wonder, who dropped “Little” before his name in 1963, saw his “Fingertips Part 2” rocket up the charts to No. 1. Martha and the Vandellas made their mark with “Heat Wave,” a song with “irrepressibly joyous momentum” (p.197). But the title could have referred equally to the rising intensity of the nationwide quest for racial justice and civil rights for African-Americans that summer.

       Maraniss reminds us that in June 1963, nine weeks before the March on Washington, Dr. Martin Luther King, Jr. delivered the outlines of his famous “I Have a Dream” speech at the end of a huge Detroit “Walk to Freedom” rally that took place almost exactly 20 years after a devastating racial confrontation between blacks and whites in wartime Detroit. The Walk drew an estimated 100,000 marchers, including a significant if limited number of whites. What King said that June 1963 afternoon, Maraniss writes, was “virtually lost to history, overwhelmed by what was to come, but the first time King dreamed his dream at a large public gathering, he dreamed it in Detroit” (p.182). Concerns about disorderly conduct and violence preceded both the Detroit Walk to Freedom and the March on Washington two months later. Yet, the two events were for all practical purposes free of violence. Just as the March on Washington energized King’s non-violent quest for civil rights nation-wide, the Walk to Freedom buoyed Detroit’s claim to be a model of racial justice in the urban north.

       In the Walk to Freedom and in the nationwide quest for racial justice, Walter Reuther was an unsung hero. Under Reuther’s leadership, the UAW made an “unequivocal moral and financial commitment to civil rights action and legislation” (p.126). Once John Kennedy assumed the presidency, Reuther consistently pressed the administration to move on civil rights. The White House in turn relied on Reuther to serve as a liaison to black civil rights leaders, especially to Dr. King and his southern desegregation campaign. The UAW functioned as what Maraniss terms the “bank” (p.140) of the Civil Rights movement, providing needed funding at critical junctures. To be sure, Maraniss emphasizes, not all rank-and-file UAW members shared Reuther’s passionate commitment to the Walk to Freedom, the March on Washington, or to the cause of civil rights for African-Americans.

      Even within Detroit’s black community, not all leaders supported the Walk to Freedom. Maraniss provides a close look at the struggle between the Reverend C.L. Franklin and the Reverend Albert Cleage for control over the details of the Walk to Freedom and, more generally, for control over the direction of the quest for racial justice in Detroit. Reverend Franklin, Detroit’s “flashiest and most entertaining preacher” (p.12; also the father of singer Aretha, who somehow escaped Gordy’s clutches to perform for Columbia Records and later Atlantic), was King’s closest ally in Detroit’s black community. Cleage, whose church later became known as the Shrine of the Black Madonna, founded on the belief that Jesus was black, was not wedded to Dr. King’s brand of non-violence. Cleage sought to limit the influence of Reuther, the UAW and whites generally in the Walk to Freedom. Franklin was able to retain the upper hand in setting the terms and conditions for the June 1963 rally. But the dispute between Reverends Franklin and Cleage reflected the more fundamental difference between black nationalism and Martin Luther King style integration, and was thus an “early formulation of a dispute that would persist throughout the decade” (p.232).

     In November of 1963, Cleage sponsored a conference that featured black nationalist Malcolm X’s “Message to the Grass Roots,” an important if less well known counterpoint to King’s “I Have A Dream” speech in Washington in August of that year.  In tone and substance, Malcolm’s address “marked a break from the past and laid out a path for the black power movement to follow from then on” (p.279). Malcolm referred in his speech to the highly publicized police killing of prostitute Cynthia Scott the previous summer, which had generated outrage throughout Detroit’s black community and exacerbated long simmering tensions between the community and a police force that was more than 95% white.

     Scott’s killing “discombobulated the dynamics of race in the city. Any communal black and white sensibility resulting from the June 23 [Walk to Freedom] rally had dissipated, and the prevailing feeling was again us versus them” (p.229).  The tension between police and community did not abate when Police Commissioner George Edwards, a long standing liberal who enjoyed strong support within the black community, considered the Scott case carefully and ruled that the shooting was “regrettable and unwise . . . but by the standards of the law it was justified” (p.199).

       Then there was the contentious issue of a proposed Open Housing ordinance that would have forbidden property owners from refusing to sell their property on the basis of race. The proposed ordinance required passage from the city’s nine-person City Council, elected at large in a city that was one-third black – no one on the council directly represented the city’s black neighborhoods. The proposal was similar in intent to future national legislation, the Fair Housing Act of 1968, and had the enthusiastic support of Detroit’s progressive Mayor, Jerome Cavanagh, a youthful Irish Catholic who deliberately cast himself as a mid-western John Kennedy.

      But the proposal evoked bitter opposition from white homeowner associations across the city, revealing the racial fissures within Detroit. “On one side were white homeowner groups who said they were fighting on behalf of individual rights and the sanctity and safety of their neighborhoods. On the other side were African American churches and social groups, white and black religious leaders, and the Detroit Commission on Community Relations, which had been established . . . to try to bridge the racial divide in the city” (p.242).   Notwithstanding the support of the Mayor and leaders like Reuther and Reverend Franklin, white homeowner opposition doomed the proposed ordinance. The City Council rejected the proposal 7-2, a stinging rebuke to the city’s self-image as a model of racial progress and harmony.

       Detroit’s failed bid for the 1968 Olympics was an equally stinging rebuke to the self-image of a city that loved sports as much as music. Detroit bested more glamorous Los Angeles for the right to represent the United States in international competition for the games. A delegation of city leaders, including Governor Romney and Mayor Cavanaugh, traveled to Baden-Baden, Germany, where they made a well-received presentation to the International Olympic Committee. While Detroit was making its presentation, the Committee received a letter from an African American resident of Detroit who alluded to the Scott case and the failed Open Housing Ordinance to argue against awarding the games to the city on the ground that fair play “has not become a living part of Detroit” (p.262). Although bookmakers had made Detroit a 2-1 favorite for the 1968 games, the Committee awarded them to Mexico City, a selection based largely, Maraniss suggests, on Cold War considerations, with Soviet bloc countries voting against Detroit. The delegation dismissed the view that the letter to the Committee might have undermined Detroit’s bid, but its actual effect on the Committee’s decision remains undetermined.

         Maraniss asks whether Detroit might have been able to better contain or even ward off the devastating 1967 riots had it been awarded the 1968 Olympic games. “Unanswerable, but worth pondering” is his response (p.271). In explaining the demise of Detroit, many, myself included, start with the 1967 riots, which in a few short but violent days destroyed large swaths of the city, obliterating once solid neighborhoods and accelerating white flight to the suburbs.  But Maraniss emphasizes that white flight was already well underway long before the 1967 disorders. The city’s population had dropped from just under 1.9 million in the 1950 census to 1.67 million in 1960. In January of 1963, Wayne State University demographers published “The Population Revolution in Detroit,” a study which foresaw an even more precipitous exodus of Detroit’s working class in the decades ahead. The Wayne State demographers “predicted a dire future long before it became popular to attribute Detroit’s fall to a grab bag of Rust Belt infirmities, from high labor costs to harsh weather, and before the city staggered from more blows of municipal corruption and incompetence. Before any of that, the forces of deterioration were already set in motion” (p.91). Only a minor story in January 1963, the findings and projections of the Wayne State study in retrospect were of “startling importance and haunting prescience” (p.89).

* * *

      My high school classmates are likely to find Maraniss’ book a nostalgic trip down memory lane: his 18 month period begins with our senior year in a suburban Detroit high school and ends with our freshman college year — our own time of soaring youthful dreams, however unrealistic. But for those readers lacking a direct connection to the book’s time and place, and particularly for those who may still think of Detroit only as an urban basket case, Maraniss provides a useful reminder that it was not always thus.  He nails the point in a powerful sentence: “The automobile, music, labor, civil rights, the middle class – so much of what defines our society and culture can be traced to Detroit, either made there or tested there or strengthened there” (p.xii).  To this, he could have added, borrowing from Martha and the Vandellas’ 1964 hit, “Dancing in the Streets,” that America can’t afford to forget the Motor City.

 

                   Thomas H. Peebles

Berlin, Germany

October 28, 2016


Filed under American Politics, American Society, United States History

Blithe Optimist


Rick Perlstein, The Invisible Bridge:

The Fall of Nixon and the Rise of Reagan

     Rick Perlstein has spent his career studying American conservatism in the second half of the 20th century and its capture of the modern Republican Party. His first major work, Before the Storm: Barry Goldwater and the Unmaking of the American Consensus, was an incisive and entertaining study of Senator Barry Goldwater’s 1964 Republican Party nomination for the presidency and his landslide loss that year to President Lyndon Johnson. He followed with Nixonland: The Rise of a President and the Fracturing of America, a description of the nation at the time of Richard Nixon’s landslide 1972 victory over Senator George McGovern  — a nation divided by a cultural war between “mutually recriminating cultural sophisticates on the one hand and the plain, earnest ‘Silent Majority’ on the other” (p.xix). Now, in The Invisible Bridge: The Fall of Nixon and the Rise of Reagan, Perlstein dives into American politics between 1973 and 1976, beginning with Nixon’s second term and ending with the failed bid of the book’s central character, Ronald Reagan, for  the 1976 Republican Party presidential nomination.

     The years 1973 to 1976 included the Watergate affair that ended the Nixon presidency in 1974; the ultra-divisive issue of America’s engagement in Vietnam, which ended in an American withdrawal from that conflict in 1975; and the aftershocks from the cultural transformations often referred to as “the Sixties.” It was a time, Perlstein writes, when America “suffered more wounds to its ideal of itself than at just about any other time in its history” (p.xiii). 1976 was also the bicentennial year of the signing of the Declaration of Independence, which the nation approached with trepidation. Many feared, as Perlstein puts it, that celebration of the nation’s 200th anniversary would serve the “malign ideological purpose of dissuading a nation from a desperately needed reckoning with the sins of its past” (p.712).

     Perlstein begins by quoting advice Nikita Khrushchev purportedly provided to Richard Nixon: “If the people believe there’s an imaginary river out there, you don’t tell them there’s no river there. You build an imaginary bridge over the imaginary river.” Perlstein does not return to Khrushchev’s advice and, as I ploughed through his book, I realized that I had not grasped how the notion of an “invisible bridge” fits into his lengthy (804 pages!) narrative. More on that below. There’s no mystery, however, about Perlstein’s sub-title “The Fall of Nixon and the Rise of Reagan.”

     About one third of the book addresses Nixon’s fall in the Watergate affair and another third recounts Reagan’s rise to challenge President Gerald Ford for the 1976 Republican Party presidential nomination, including the year’s presidential primaries and the maneuvering of the Ford and Reagan presidential campaigns at the Republican National Convention that summer. The remaining third consists of biographical background on Reagan and his evolution from a New Deal liberal to a conservative Republican; an examination of the forces that were at work in the early 1970s to mobilize conservatives after Goldwater’s disastrous 1964 defeat; and Perlstein’s efforts to describe the American cultural landscape in the 1970s and capture the national mood, through a dazzling litany of vignettes and anecdotes. At times, it seems that Perlstein has seen every film that came to theatres in the first half of the decade; watched every television program from the era; and read every small and mid-size town newspaper.

     Perlstein describes his work as a “sort of biography of Ronald Reagan – of Ronald Reagan, rescuer” (p.xv) — rescuer, presumably, of the American psyche from the cultural convulsions of the Sixties and the traumas of Watergate and Vietnam that had shaken America’s confidence to the core. Perlstein considers Reagan to have been a gifted politician who exuded a “blithe optimism in the face of what others called chaos” (p.xvi), with an uncanny ability to simplify complex questions, often through stories that could be described as homespun or hokey, depending upon one’s perspective. Reagan was an “athlete of the imagination,” Perlstein writes, who was “simply awesome” at “turning complexity and confusion and doubt into simplicity and stout-heartedness and certainty” (p.48). This power was a key to “what made others feel so good in his presence, what made them so eager and willing to follow him – what made him a leader. But it was why, simultaneously, he was such a controversial leader” (p.xv).   Many regarded Reagan’s blithe optimism as the work of a “phony and a hustler” (p.xv). At bottom, Reagan was a divider and not a uniter, Perlstein argues, and “understanding the precise ways that opinions about him divided Americans . . . better helps us to understand our political order of battle today: how Americans divide themselves from one another” (p.xvi).

* * *

     In a series of biographical digressions, Perlstein demonstrates how Reagan’s blithe mid-western optimism served as the foundation for a long conversion to political conservatism.  Perlstein begins with Reagan’s upbringing in Illinois, his education at Illinois’ Eureka College, and his early years as a sportscaster in Iowa. Reagan left the mid-west in 1937 for Hollywood and a career in films, arriving in California as a “hemophiliac, bleeding heart liberal” (p.339). But, during his Hollywood years, Reagan came to see Communist Party infiltration of the film industry as a menace to the industry’s existence. He was convinced that Communist actors and producers had mastered the subtle art of making the free enterprise system look bad and thereby were undermining the American way of life.   Reagan became an informant for the FBI on the extent of Communist infiltration of Hollywood, a “warrior in a struggle of good versus evil – a battle for the soul of the world” (p.358), as Perlstein puts it. Reagan further came to resent the extent of taxation and viewed the IRS as a public enemy second only to Communists.

     Yet, Reagan remained a liberal Democrat through the 1940s. In 1948, he worked for President Truman’s re-election and introduced Minneapolis mayor Hubert Humphrey to a national radio audience. In 1952, Reagan supported Republican Dwight Eisenhower’s bid for the presidency. His journey toward the conservative end of the spectrum was probably completed when he became host in 1954 of General Electric’s “GE Theatre,” a mainstay of early American television. One of America’s corporate giants, GE’s self-image was of a family that functioned in frictionless harmony, with the interests of labor and management miraculously aligned. GE episodes, Perlstein writes, were the “perfect expression” of the 1950s faith that nothing “need ever remain in friction in the nation God had ordained to benevolently bestride the world” (p.395). Reagan and his blithe optimism proved to be a perfect fit with GE Theatre’s mission of promoting its brand of Americanism, based on low taxes, unchallenged managerial control, and freedom from government regulatory interference.

     In the 1960 presidential campaign, Reagan depicted the progressive reforms which Democratic nominee John Kennedy advocated as being inspired by Karl Marx and Adolf Hitler. Richard Nixon, Kennedy’s rival, noted Reagan’s evolution and directed his staff to use Reagan as a speaker “whenever possible. He used to be a liberal” (p.374). By 1964, Reagan had become a highly visible backer of Barry Goldwater’s presidential quest, delivering a memorable speech in support of the candidate at the Republican National Convention. Reagan went on to be elected twice as governor of California, in 1966 and 1970.

     While governor, Reagan consistently argued for less government.  Our highest national priority, he contended at a national governors’ conference in 1973, should be to “halt the trend toward bigger, more expensive government at all levels before it is too late . . . We as citizens will either master government as our servant or ultimately it will master us” (p.160). Almost alone among conservatives, Reagan projected an image of a “pleasant man who understands why people are angry” (p.604), as one commentator put it. He gained fame if not notoriety during his tenure as governor for his hard line opposition to student protesters, particularly at the University of California’s Berkeley campus, attracting scores of working class Democrats who had never previously voted for a Republican. “Part of what made Berkeley [student unrest] such a powerful issue for traditionally Democratic voters was class resentment – something Ronald Reagan understood in his bones” (p.83).

     Early in Reagan’s second term as California’s governor, on June 17, 1972, four burglars were caught attempting to break into the Democratic national headquarters in Washington’s Watergate office and apartment complex. Throughout the ensuing investigation, Reagan seemed indifferent to what Time Magazine termed “probably the most pervasive instance of top-level misconduct in [American] history” (p.77).

* * *

     Watergate to Reagan was part of the usual atmosphere of campaigning, not much more than a prank.  Upon first learning about the break-in, he quipped that the Democrats should be happy that someone considered their documents worth reading. Throughout the investigation into corruption that implicated the White House, Reagan maintained a stubborn “Christian charity to a fallen political comrade” (p.249). The individuals involved, he argued, were “not criminals at heart” (p.81). He told conservative commentators Rowland Evans and Robert Novak that he found “no evidence of criminal activity” in Watergate, which was why Nixon’s detractors were training their fire on “vague areas like morality and so forth” (p.249-50). Alone among political leaders, Reagan insisted that Watergate “said nothing important about the American character” (p.xiv).

     Thus, few were surprised when Reagan supported President Gerald Ford’s widely unpopular presidential pardon of Nixon for any crimes he might have committed related to Watergate, issued one month after Nixon’s resignation. Nixon had already suffered “punishment beyond anything any of us could imagine” (p.271), Reagan argued. Ford’s pardon of Nixon dissipated the high level of support that he had enjoyed since assuming the presidency, sending his public approval ratings from near-record highs to new lows. Democrats gained a nearly 2-1 advantage in the House of Representatives in the 1974 mid-term elections and Reagan’s party “seemed near to death” (p.329).

     As Ford’s popularity waned, Reagan saw an opportunity to challenge the sitting president. He announced his candidacy in November 1975. Reagan said he was running against what he termed a “buddy system” in Washington, an incestuous network of legislators, bureaucrats, and lobbyists which:

functions for its own benefit – increasingly insensitive to the needs of the American worker, who supports it with his taxes. . . I don’t believe for one moment that four more years of business as usual in Washington is the answer to our problems, and I don’t believe the American people believe it, either (p.547).

With Reagan’s bid for the 1976 Republican nomination, Perlstein’s narrative reaches its climactic conclusion.

* * *

     The New York Times dismissed the presidential bid as an “amusing but frivolous Reagan fantasy” and wondered how Reagan could be “taken so seriously by the news media” (p.546). Harper’s termed Reagan the “Candidate from Disneyland” (p.602), labeling him “Nixon without the savvy or self pity. . . That he should be regarded as a serious candidate for President is a shame and embarrassment” (p.602). Commentator Garry Wills responded to Reagan’s charge that the media was treating him unfairly by conceding that it was indeed “unfair to expect accuracy or depth” from Reagan (p.602). But, as Perlstein points out, these comments revealed “more about their authors than they did about the candidate and his political prospects” (p.602), reflecting what he terms elsewhere the “myopia of pundits, who so frequently fail to notice the very cultural ground shifting beneath their feet” (p.xv).

     1976 proved to be the last year either party determined its nominee at the convention itself, rather than in advance. Reagan went into the convention in Kansas City as the most serious threat to an incumbent president since Theodore Roosevelt had challenged William Howard Taft for the Republican Party nomination in 1912. His support in the primaries and at the convention benefitted from a conservative movement that had come together to nominate Barry Goldwater in 1964, a committed “army that could lose a battle, suck it up, and then regroup to fight a thousand battles more” (p.451) — “long memoried elephants” (p.308), Perlstein terms them elsewhere.

     In the years since the Goldwater nomination, evangelical Christians had become more political, moving from the margins to the mainstream of the conservative movement. Evangelical Christians were behind an effort to have America declared officially a “Christian nation.” Judicially-imposed busing of school students to achieve greater racial balance in public schools precipitated a torrent of opposition in cities as diverse as Boston, Massachusetts and Louisville, Kentucky – the Boston opposition organization was known as ROAR, Restore Our Alienated Rights. Perlstein also traces the conservative reaction to the Supreme Court’s 1973 Roe v. Wade decision, which recognized a constitutional right to abortion. The 1976 Republican party platform for the first time recommended a Human Life amendment to the constitution to reverse the decision.

     Activist Phyllis Schlafly, who died just weeks ago, led a movement to derail the proposed Equal Rights Amendment, intended to establish gender equality as a constitutional mandate. Schlafly’s efforts contributed to stopping the proposed amendment at a time when ratification by only three additional states would have made it part of the federal constitution (“Don’t Let Satan Have Its Way – Stop the ERA” was the opposition slogan, as well as Perlstein’s title for a chapter on the subject). Internationally, conservatives opposed the Ford administration’s intention to relinquish to Panama control of the Panama Canal; and the policy of détente toward the Soviet Union which both the Nixon and Ford administrations pursued.

     Enabling the long-memoried elephants was Richard Viguerie, a little-known master of new technologies for fund-raising and grass roots get-out-the-vote campaigns. Conservative opinion writers like Patrick Buchanan, former Nixon White House Communications Director, and George Will also enjoyed expanded newspaper coverage. A fledgling conservative think tank based in Washington, the Heritage Foundation, became a clearinghouse for conservative thinking and action. The Heritage Foundation assisted a campaign in West Virginia to purge school textbooks of “secular humanism.”

     With the contest for delegates nearly even as the convention approached, Reagan needed the support of conservatives for causes like these. But Reagan also realized that limited support from centrist delegates could prove to be his margin of difference. In a bid to attract such delegates, especially from the crucial Pennsylvania delegation, Reagan promised in advance of the convention to name Pennsylvania Senator Richard Schweiker as his running mate. Schweiker came from the moderate wing of the party, with a high rating from the AFL-CIO. But the move backfired, infuriating conservatives — North Carolina Senator Jesse Helms in particular — with few moderate delegates switching to Reagan.   Then, Reagan’s supporters proposed a change to the convention’s rules that would have required Ford to announce his running mate prior to the presidential balloting, forcing Ford to anger either the moderate or conservative faction of the party. Ford supporters rejected the proposal, which lost on the full floor after a close vote.

     The 150 delegates of the Mississippi delegation proved to be crucial in determining the outcome of the convention’s balloting. When the Mississippi delegation cast its lot with Ford, the president had a sufficient number of delegates to win the nomination on the first ballot, 1187 votes to 1070 for Reagan. Ford selected Kansas Senator Robert Dole as his running mate, after Vice President Nelson Rockefeller, whom conservatives detested, announced the previous fall that he did not wish to be a candidate for Vice President. Anxious to achieve party unity, Ford invited Reagan to join him on the platform following his acceptance speech. Reagan gave an eloquent impromptu speech that many thought overshadowed Ford’s own acceptance address.

* * *

     Perlstein includes a short, epilogue-like summation of the climactic Kansas City convention: Ford went on to lose a close 1976 general election to Jimmy Carter, the Democratic former governor of Georgia, and Reagan emerged as the undisputed leader of his party’s conservative wing. But as the book ended, I found myself still asking how the notion of an “invisible bridge” fits into this saga. My best guess is that the notion is tied to Perlstein’s description of Reagan as a “rescuer.”  Reagan’s failed presidential campaign was a journey across a great divide – over an invisible bridge.

     On the one side were Watergate, the Vietnam War, repercussions from the Sixties and, for conservatives, Goldwater’s humiliating 1964 defeat. On the other side was the promise of an unsullied way forward.  Reagan’s soothing cult of optimism offered Americans a message that could allow them to again view themselves and their country positively.  There were no sins that Reagan’s America need atone for. Usually dour and gloomy conservatives — Perlstein’s “long memoried elephants” — also saw in Reagan’s buoyant message the discernible path to power that had eluded them in 1964. But, as Perlstein will likely underscore in a subsequent volume, many still doubted whether the blithe optimist had the temperament or the intellect to be president, while others suspected that his upbeat brand of conservatism could no more be sold to the country-at-large than the Goldwater brand in 1964.

Thomas H. Peebles

La Châtaigneraie, France

October 2, 2016



Filed under American Politics, American Society, Biography