
Two Who Embodied That Sweet Soul Music


Jonathan Gould, Otis Redding: An Unfinished Life 

Tony Fletcher, In the Midnight Hour: The Life and Soul of Wilson Pickett 

        By 1955, the year I turned 10, I had already been listening to popular music for a couple of years on a small bedside radio my parents had given me. My favorite pop singers were Patti Page and Eddie Fisher, whose soft, staid, melodious songs seemed in tune with the Big Band and swing music of my parents’ generation. The previous year, 1954, a guy named Bill Haley had come out of nowhere onto the popular music scene with “Rock Around the Clock,” which he followed in 1955 with “Shake, Rattle, and Roll.” Haley’s two hits became the centerpiece of my musical world. They were so different: they moved, they jumped – they rocked and they rolled! – in a way that resembled nothing I had heard from Page, Fisher and their counterparts.

        The term “rock ’n’ roll” was already in use in 1955 to describe the new style that Haley’s songs represented. But “Rock Around the Clock” and “Shake, Rattle, and Roll” were not the only hit tunes I listened to that year that seemed light years apart from what I had been familiar with. There was Ray Charles with “I Got a Woman”; Chuck Berry with “Maybellene”; and, most exotic of all, a man named Richard Penniman, known in the record world as “Little Richard,” who rose to fame with a song titled “Tutti Frutti.” What I didn’t realize then was that Charles, Berry and Penniman were African-Americans, whereas Haley was a white guy, and that Charles and his counterparts were bringing their brand of popular music, then officially called “rhythm and blues” (and more colloquially “R & B”), into the popular music mainstream on a massive scale for white listeners like me.  Within a decade after that breakthrough year of 1955, “soul music” had largely supplanted “rhythm and blues” as the term of choice for African-American popular music.

          Also listening to Charles, Berry and Penniman in 1955 were two African-American teenagers from the American South, both born in 1941, both named for their fathers: Otis Redding, Jr., and Wilson Pickett, Jr.  Redding was from Macon, Georgia (as was “Little Richard” Penniman). Pickett was from rural Alabama, but lived a substantial part of his adolescence with his father in Detroit. Each had already shown talent for gospel singing, which was then becoming a familiar pathway for African-Americans into secular rhythm and blues, and thus into the burgeoning world of popular music. A decade later, the two found themselves near the top of a staggering alignment of talent in the popular music world.

          As I look back at the period that began in 1955 and ended around 1970, I now see a golden era of American popular music.  It saw the rise of Elvis Presley, the Beatles, the Rolling Stones, and Bob Dylan, along with oh so many stellar practitioners of that “sweet soul music,” to borrow from the title of a 1967 hit that Redding helped develop. Ray Charles, Chuck Berry, and Little Richard Penniman may have jump-started the genre in that pivotal year 1955, but plenty of others were soon competing with these pioneers: Sam Cooke, James Brown (another son of Macon, Georgia), Fats Domino, Marvin Gaye, the Platters, the Temptations, Clyde McPhatter and later Ben E. King and the Drifters, Curtis Mayfield and the Impressions, and Smokey Robinson and the Miracles were among the most prominent male stars, while Aretha Franklin, Mary Wells, Dionne Warwick, the Marvelettes, the Shirelles, Diana Ross and the Supremes, and Martha Reeves and the Vandellas were among the women who left their imprint upon this golden era.

          But if I had to pick two songs that represented the quintessence of that sweet soul music in this golden era, my choices would be Pickett’s “In the Midnight Hour,” and Redding’s “Sittin’ on the Dock of the Bay,” two songs that to me still define and embody soul music. Two recent biographies seek to capture the men behind these irresistible voices: Jonathan Gould’s Otis Redding: An Unfinished Life, and Tony Fletcher’s In the Midnight Hour: The Life and Soul of Wilson Pickett.  Despite Redding and Pickett’s professional successes, their stories are both sad, albeit in different ways.

         Gould’s title reminds us that Redding died before the end of the golden age, in a plane crash in Wisconsin in December 1967, at age 26, as his career was soaring.  Pickett in Fletcher’s account had peaked by the end of the 1960s, with his career thereafter going into a steep downward slide. Through alcohol and drugs, Pickett destroyed himself and several people around him. Most tragically, Pickett physically abused the numerous women in his life. Pickett died in January 2006 at the age of 64 of a heart attack, most likely brought about at least in part by years of substance abuse.

        Popular music stars are rarely like poets, novelists, or even politicians, who leave an extensive written record of their thoughts and activities.   The record for most pop music stars consists primarily of their records.  Gould, more handicapped than Fletcher in this regard given Redding’s premature death in 1967, gets around this obstacle by producing a work that is only about one-half Otis Redding biography.  The other half of his work provides a textbook overview of African-American music in the United States and its relationship to the condition of African-Americans.

        Unlike many of their peers, neither Redding nor Pickett manifested much outward interest in the American Civil Rights movement that was underway as their careers took off and peaked. But the story of African-American singers beginning their careers in the 1950s and rising to prominence in the lucrative world of 1960s pop music cannot be told apart from that movement.  At every phase of his story of Otis Redding, Gould reminds readers what was going on in the quest for African-American equality: Rosa Parks and the Montgomery bus boycott, Dr. Martin Luther King’s marches, Civil Rights legislation passed under President Lyndon Johnson, and the rise of Malcolm X’s less accommodating message about how to achieve full equality are all part of Gould’s story, as are the day-to-day indignities that African-American performers endured as they advanced their careers.  Fletcher does not ignore this background – no credible biographer of an African-American singer in the ‘50s and ‘60s could – but it is less prominent in his work.

        More than Fletcher, Gould also links African-American music to African-American history.  He treats the role music played for African-Americans in the time of slavery, during Reconstruction, during the Jim Crow era, and into the post-World War II and modern Civil Rights era. Gould’s overview of African-American history through the lens of African-American music alone makes his book worth reading, and may give it particular appeal to readers from outside the United States who know and love American R&B and soul music, but are less familiar with the historical and sociological context in which it emerged.  But both writers provide lively, detailed accounts of the 1950s and 1960s musical scene in which Redding and Pickett rose to prominence.  Just about every soul music practitioner whom I admired in that era makes an appearance in one or both books.  The two books should thus constitute a welcome trip down memory lane for those who still love that sweet soul music.

* * *

        Otis Redding grew up in a home environment far more stable than that of Wilson Pickett.  Otis was the fourth child, after three sisters, born to Otis Sr. and his wife Fannie. Otis Sr. had serious health issues, but worked while he could at Robins Air Force Base, just outside Macon, Georgia.  Although only minimally educated, Otis Sr. and Fannie saw education as the key to a better future for their children.  They were particularly disappointed when Otis Jr. showed little interest in his studies and dropped out of high school at age 15. As an adolescent, Otis Jr. was known as a “big talker and a good talker, someone who could ‘run his mouth’ and hold his own in the endless arguments and verbal contests that constituted a prime form of recreation among people who quite literally didn’t have anything better to talk about” (Gould, p.115; hereafter “G”).

        Wilson Pickett was one of 11 children born into a family of sharecroppers, barely surviving in the rigidly segregated world of rural Alabama.  When Wilson, Jr. was seven, his father took the family to Detroit, Michigan, in search of a better life, and landed a job at Ford Motor Company. But the family came apart during the initial time in Detroit. His mother Lena returned to Alabama, and young Wilson ended up spending time in both places.  Wilson was subject to harsh discipline at home at the hands of both his mother and his father and grew into an irascible young man, quick to anger and frequently involved as an adolescent in physical altercations with classmates and friends.  His irascibility “provoked ever more harsh lashings, and because these still failed to deter him, it created an especially vicious cycle,” Fletcher writes, with the excessive violence Wilson later perpetrated on others representing a “continuation of the way he had been raised” (Fletcher, p.17; hereafter “F”). For a while, Pickett attended Detroit’s Northwestern High School, where future soul singers Mary Wells and Florence Ballard were also students. But Pickett, like Redding, did not finish high school.

         Both married young. Otis married his childhood sweetheart Zelma Atwood at about the time he should have been graduating from high school, when Zelma was pregnant with their second child.  Otis arrived more than an hour late for his wedding. Despite this less-than-promising beginning, he stayed married to Zelma for the remainder of his unfinished life and became a loyal and dedicated father to two additional children. Pickett married his girlfriend Bonnie Covington at age 18, when she too was pregnant. The couple stayed technically married until 1986, but spent little time together. Pickett’s relationships with his numerous additional female partners throughout his adult life all ended badly.

        Pickett discovered his singing talent through gospel music both in church in rural Alabama and on the streets of Detroit.  In the rigidly segregated South, Fletcher explains, the African-American church provided schooling, charity and community, along with an opportunity to listen to and participate in music.  Gospel was often the only music that young African-Americans in the 1940s and early 1950s were exposed to. “No surprise, then, that for a young Wilson Pickett, gospel music became everything” (F., p.18).  Similarly, it was “all but inevitable that Otis Redding would choose to focus his early musical energies on gospel singing” (G., p.62) at the Baptist Church in Macon which his parents attended.

       Redding gained attention as a 16-year-old for his credible imitations of Little Richard. Soon after, he was able to replicate fluently the major R & B songs of the late 1950s. Through a neighborhood friend, Johnny Jenkins, a skilled guitarist, Redding joined a group called the Pinetoppers, which played at local African American clubs – dubbed the “Chitlin’ circuit” – and earned money playing at all-white fraternity parties at Mercer University in Macon and the University of Georgia.  Redding also spent a short time in Los Angeles visiting relatives, where he fell under the spell of Sam Cooke. Pickett started singing professionally in Detroit with a group known as the Falcons, which also featured Eddie Floyd, who would later go on to record “Knock on Wood,” a popular hit of the mid-60s.  Pickett’s first solo recording came in 1962, “If You Need Me.”

          Redding and Pickett in these two accounts had little direct interaction, and although they looked upon one another as rivals as their careers took off, each appears to have had a high degree of respect for the other. But each had a contract with Atlantic Records, and their careers thus followed closely parallel tracks.  Based in New York, Atlantic signed and marketed some of the most prominent R & B singers of the late 1950s and early 1960s, including Ray Charles and Aretha Franklin (whose charms were felt by both Redding and Pickett), along with several leading jazz artists and a handful of white singers. By the mid-1960s, Atlantic and its Detroit rival, Berry Gordy’s Motown Records, dominated the R & B sector of American popular music.

       Both men’s careers benefitted from the creative marketing of Jerry Wexler, who joined Atlantic in 1953 after working for Billboard Magazine (where he had coined the term “rhythm and blues” to replace “race music” as a category for African American music). Atlantic and Wexler cultivated licensing arrangements with smaller recording companies where both Redding and Pickett recorded, including Stax in Memphis, Tennessee, and Fame in Muscle Shoals, Alabama.  Redding and Pickett’s relationships with Wexler at Atlantic, and with a colorful cast of characters at Stax and Fame, play a prominent part in the two biographies.

          But the most affecting business relationship in the two books is the one Redding established with Phil Walden, his primary manager and promoter during his entire career. Walden, a white boy from Macon the same age as Redding, loved popular music of all types and developed a particular interest in the burgeoning rhythm and blues style.  Phil initially booked Otis to sing at fraternity parties at all-white Mercer University in Macon, where Walden was a student, and somehow the two young men from different worlds within the same hometown bonded. Gould uses the improbable Redding-Walden relationship to illustrate how complex black-white relationships could be in the segregated South, and how the two young men navigated these complexities to their mutual benefit.

       In 1965, Pickett produced his first hit, “In the Midnight Hour,” “perhaps the song most emblematic of the whole southern soul era” (F., p.74). The song appealed to the same white audiences that were listening to the Beatles, the Rolling Stones and the other British invasion bands. It was “probably the first southern soul recording to have such an effect on such a young white audience,” Fletcher writes, “yet it was every bit an authentic rhythm and blues record too, the rare kind of single that appealed to everyone without compromising” (F., p.76).

         Pickett had three major hits the following year, 1966: “634-5789,” “Land of 1,000 Dances,” and “Mustang Sally.” The first two rose to #1 on the R & B charts.  Although “634-5789” was in Fletcher’s terms a “blatant rip-off” of the Marvelettes’ “Beechwood 4-5789” and the “closest Pickett would ever come to sounding like part of Motown” (F., p.80), it surpassed “In the Midnight Hour” in sales. In 1968, Pickett turned the Beatles’ “Hey Jude” into his own hit. He also made an eye-opening trip to the newly independent African nation of Ghana, as part of a “Soul to Soul” group that included Ike and Tina Turner and Roberta Flack.  Pickett’s “In the Midnight Hour” worked the 100,000-plus crowd into a frenzy, Fletcher recounts. Pickett was the “ticket that everyone wanted to see” (F., p.169) and his performance in Ghana may have marked his career’s high point (although the tour included an embarrassing low point when Pickett and Ike Turner got into a fight over dressing room priorities).

           “Dock of the Bay,” the song most closely identified with Otis Redding, was released in 1968, and became the first posthumous number one hit in American music history.  At the time of his death in late 1967, Redding had firmly established his reputation with a remarkable string of hits characterized by powerful emotion and depth of voice: “Try a Little Tenderness,” “These Arms of Mine,” “Pain in My Heart,” “Mr. Pitiful,” and “I’ve Been Loving You Too Long.” Just as Pickett had reworked the Beatles’ “Hey Jude,” Redding “covered,” to use the music industry term, the Rolling Stones’ signature hit “Satisfaction” with his own idiosyncratic version.  Pickett’s “Hey Jude” and Redding’s “Satisfaction,” the two authors note, deftly reversed a trend in popular music, in which for years white singers had freely appropriated African-American singers’ work.

          Gould begins his book with what proved to be the high water mark of Redding’s career, his performance at the Monterey Pop Festival in June 1967. There, he mesmerized the mostly white audience – “footloose college students, college dropouts, teenaged runaways, and ‘flower children’” (G., p.1) – with an electrifying five-song performance, “song for song and note for note, the greatest performance of his career” (G., p.412).  The audience, which had come to hear the Jefferson Airplane, Janis Joplin and Jimi Hendrix, rewarded Redding with an uninterrupted 10-minute standing ovation.

           After Monterey, Redding developed throat problems that required surgery.  During his recuperation, he wrote “Dock of the Bay.” Gould sees affinities in the song to the Beatles’ “A Day in the Life.” Otis was seeking a new form of musical identity, Gould contends, becoming more philosophical and introspective, “shedding his usual persona of self-assurance and self-assertion in order to convey the uncertainty and ambivalence of life as it is actually lived” (G., p.447).

          Redding’s premature death, Gould writes, “inspired an outpouring of publicity that far exceeded the sum of what was written about him during his life” (G., p.444). Both writers quote Jerry Wexler’s eulogy: Otis was a “natural prince . . . When you were with him he communicated love and a tremendous faith in human possibility, a promise that great and happy events were coming” (G., p.438; F., p.126). There is a tinge of envy in Fletcher’s observation that Otis’ musical reputation remained “untarnished – preserved at its peak by his early death” (F., p.126).

          Pickett’s story is quite the opposite.  Although he had a couple of mid-level hits in the 1970s, Pickett’s life entered into its own long, slow but steady demise in the years following Redding’s death.  Pickett drank excessively while becoming a regular cocaine consumer during these years. His father had struggled with alcohol, and Pickett exhibited all the signs of severe alcoholism, including heavy morning drinking. Fletcher describes painful instances of domestic violence perpetrated against each of the women with whom Pickett lived.  He was the subject of numerous civil complaints and served some jail time for domestic violence offenses.  Of course, Redding might have gone into a decline as abrupt as that of Pickett had he lived longer; his career might have plateaued and edged into mediocrity, like Pickett’s; and his personal life might have become as messy as Pickett’s.  We’ll never know.

* * *

          Pickett was far from the only star whose best songs were behind him as the 1970s dawned.  Elvis comes immediately to mind, but the same could be said of the Beatles and the Rolling Stones. Berry Gordy moved his Motown operation from Detroit to Los Angeles in 1972, where it never recaptured the spark it had enjoyed . . . in Motown.   By 1970, a harder form of rock, intertwined with the psychedelic drug culture, was in competition with that sweet soul music. The 1960s may have been a turbulent decade but the popular music trends that began in 1955 and culminated in that decade were, as Gould aptly puts it, “graced by the talents of an incomparable generation of African-American singers” (G., p.465). The biographies of Otis Redding and Wilson Pickett take us deeply into those times and their unsurpassed music. It was fun while it lasted.

Thomas H. Peebles

Marseille, France

February 26, 2018


Inside Both Sides of Regime Change in Iraq


John Nixon, Debriefing the President: The Interrogation of Saddam Hussein

          When Saddam Hussein was captured in Iraq in December 2003, it marked only the second time in the post-World War II era in which the United States had detained and questioned a deposed head of state, the first being Panama’s Manuel Noriega in 1989.  On an American base near Baghdad, CIA intelligence analyst John Nixon led the initial round of questioning of Saddam in December 2003 and January 2004.  In the first two-thirds of Debriefing the President: The Interrogation of Saddam Hussein, Nixon shares some of the insights he gained from his round of questioning  — insights about Saddam himself, his rule, and the consequences of removing him from power.

        Upon his return to the United States, Nixon became a regular at meetings on Iraq at the White House and National Security Council, including several with President George W. Bush.   The book’s final third contains Nixon’s account of these meetings, which continued up to the end of the Bush administration. In this portion of the book, Nixon also reflects upon the role of CIA intelligence analysis in the formulation of foreign policy.  Nixon is one of the few individuals — maybe the only individual — who had extensive exposure both to Saddam and to those who drove the decision to remove him from power in 2003.  Nixon thus offers readers of this compact volume a formidable inside perspective on Saddam’s regime and the US mission to change it.

         But while working through Nixon’s account of his meetings with Saddam, I was puzzled by his title, “Debriefing the President,” asking myself, which president? Saddam Hussein had held the title of President of the Republic of Iraq and continued to refer to himself as president after he had been deposed, clinging tenaciously to the idea that he was still head of the Iraqi state. So does the “president” in the title refer to Saddam Hussein or George W. Bush? With the first two-thirds of the book detailing Nixon’s discussions with Saddam, I began to think that the reference was to the former Iraqi leader, which struck me as oddly respectful of a brutal tyrant and war criminal.  But this ambiguity may be Nixon’s way of highlighting one of his major objectives in writing this book.

          Nixon seeks to provide the reading public with a fuller and more nuanced portrait of Saddam Hussein than that which animated US policymakers and prevailed in the media at the time of the US intervention in Iraq, which began fifteen years ago next month.  By detailing the content of his meetings with Saddam to the extent possible – the book contains numerous passages blacked out by CIA censors — Nixon hopes to reveal the man in all his twisted complexity. He recognizes that Saddam killed hundreds of thousands of his own people, launched a fruitless war with Iran and used chemical weapons without compunction.  He “took a proud and very advanced society and ground it into dirt through his misrule” (p.12), Nixon writes, and thus deserves the sobriquet “Butcher of Baghdad.”  But while “tyrant,” “war criminal” and “Butcher of Baghdad” can be useful starting points in understanding Saddam, Nixon seems to be saying, they should not be the end point. “It is vital to know who this man was and what motivated him.  We will surely see his likes again” in the Middle East (p.9), he writes.

          When Nixon returned to the United States after his interviews with Saddam, he was surprised that none of the high-level policy makers he met with seemed interested in the question whether the United States should have removed Saddam from power.  Nixon addresses this question in his final pages with a straightforward and unsparing answer: regime change was a catastrophe for both Iraq and the United States.

* * *

           Nixon began his career as a CIA analyst in 1998.  Working at CIA Headquarters in Virginia, he became a “leadership analyst” on Iraq, responsible for developing information on Saddam Hussein: “the family connections that helped keep him in power, his tribal ties, his motives and methods, everything that made him tick. It was like putting together a giant jigsaw puzzle with small but important pieces gleaned from clandestine reporting and electronic intercepts” (p.38).  In October 2003, roughly five months after President Bush had famously declared “mission accomplished” in Iraq, Nixon was sent from headquarters to Baghdad.  There, he helped CIA operatives and Army Special Forces target individuals for capture.  At the top of the list was HVT-1, High Value Target Number 1, Saddam Hussein.

           After Saddam was captured in December 2003 at the same farm near his hometown of Tikrit where he had taken refuge in 1959 after a bungled assassination attempt upon the Iraqi prime minister, Nixon confirmed Saddam’s identity.  US officials had assumed that Saddam would “kill himself rather than be captured, or be killed as he tried to escape. When he was captured alive, no one knew what to do” (p.76).  Nixon was surprised that the CIA became the first US agency to meet with Saddam. His team had little time to prepare or coordinate with other agencies with an interest in information from Saddam, particularly the Defense Department and the FBI.  “Everything had to be done on the fly.  We learned a lot from Saddam, but we could have learned a lot more” (p.84-85).

          Nixon’s instructions from Washington were that no coercive techniques were to be used during the meetings.  Saddam was treated, Nixon indicates, in “exemplary fashion – far better than he treated his old enemies.  He got three meals a day.  He was given a Koran and an Arabic translation of the Geneva conventions. He was allowed to pray five times each day according to his Islamic faith” (p.110).   But Nixon and his colleagues had few carrots to offer Saddam in return for his cooperation. Their position was unlike that of a prosecutor who could ask a judge for leniency in sentencing in exchange for cooperation.  Nixon told Saddam that the meetings were “his chance, once and for all, to set the record straight and tell the world who he was” (p.83).  Gradually, Nixon and his colleagues built a measure of rapport with Saddam, who clearly enjoyed the meetings as a break from the boredom of captivity.

          Saddam, Nixon found, had  “great charisma” and “an outsize presence. Even as a prisoner who was certain to be executed, he exuded an air of importance” (p.81-82).  He was “remarkably forthright when it suited his purposes. When he felt he was in the clear or had nothing to hide, he spoke freely. He provided interesting insights into the Ba’ath party and his early years, for example. But we spent most of our time chipping away at layers of defense meant to stymie or deceive us, particularly about areas such as his life history, human rights abuse, and WMD, to name just a few” (p.71-72).

         Saddam saw himself as the “personification of Iraq’s greatness and a symbol of its evolution into a modern state,” with a “grand idea of how he fit into Iraq’s history” (p.86).  He was “always answering questions with questions of history, and he would frequently demand to know why we had asked about a certain topic before he would give his answer” (p.100). He often feigned ignorance to test his interrogators’ knowledge.  He frequently began his answers “by going back to the rule of Saladin.”  Nixon “often wondered afterward how many people told Saddam Hussein to keep it brief and lived to tell about it” (p.100).

       The meetings revealed to Nixon and his colleagues that the United States had seriously underestimated the degree to which Saddam saw himself as buffeted between his Shia opponents and their Iranian backers on one side, and Sunni extremists such as al-Qaeda on the other.  Saddam, himself a Sunni who became more religious in the latter stages of his life, could not hide his enmity for Shiite Iran.  He saw Iraq as the “first line of Arab defense against the Persians of Iran and as a Sunni bulwark against its overwhelmingly Shia population” (p.4).  But Saddam considered Sunni fundamentalism to be an even greater threat to his regime than Iraq’s majority Shiites or Iran.

       What made the Sunni fundamentalists, the Wahhabis, so threatening was that they “came from his own Sunni base of support. They would be difficult to root out without alienating the Iraqi tribes, and they could rely on a steady stream of financial support from Saudi Arabia. If the Wahhabists were free to spread their ideology, then his power base would rot from within” (p.124).  Saddam seemed genuinely mystified by the United States’ intervention in Iraq. He considered himself an implacable foe of Islamic extremism, and felt that the 9/11 attacks should have brought his country and the United States closer together.  Moreover, as he mentioned frequently, the United States had supported his country during the Iran-Iraq war.

          The meetings with Saddam further confirmed that in the years leading up to the United States intervention, he had begun to disengage from ruling the country.  At the time hostilities began, he had delegated much of the running of the government to subordinates and was mainly occupied with nongovernmental pursuits, including writing a novel.  Saddam in the winter of 2003 was “not a man bracing for a pulverizing military attack” (p.46), Nixon writes.  In all the sessions, Saddam “never accepted guilt for any of the crimes he was accused of committing, and he frequently responded to questions about human rights abuses by telling us to talk with the commander who had been on the scene” (p.129).

          On the eve of the 1991 Gulf War, President George H.W. Bush had likened Saddam to Hitler, and the idea took hold in the larger American public. But not once during the interviews did Saddam say he admired either Hitler or Stalin.  When Nixon asked which world leaders he most admired, Saddam said de Gaulle, Lenin, Mao and George Washington, because they were founders of political systems and thinkers.  Nixon quotes Saddam as saying, “Stalin does not interest me. He was not a thinker. For me, if a person is not a thinker, I lose interest” (p.165).

          When Nixon told Saddam that he was leaving Iraq to return to Washington, Saddam gave him a firm handshake and told Nixon to be just and fair to him back home.  Nearly three years later, in December 2006, Saddam was put to death by hanging in a “rushed execution in a dark basement” in an Iraqi Ministry (p.270), after the United States caved to Iraqi pressure and turned him over to what turned out to be little more than a Shiite lynch mob.  Nixon concludes that Saddam’s unseemly execution signaled the final collapse of the American mission in Iraq.  Saddam, Nixon writes, was:

not a likeable guy. The more you got to know him, the less you liked him. He had committed horrible crimes against humanity.  But we had come to Iraq saying that we would make things better.  We would bring democracy and the rule of law.  No longer would people be awakened by a threatening knock on the door.  And here we were, allowing Saddam to be hanged in the middle of the night (p.270).

* * *

            Nixon’s experiences with Saddam made him a familiar face at the White House and National Security Council when he returned to the United States in early 2004.  His meetings with President Bush convinced him that Bush never came close to getting a handle on the complexities of the Middle East.  After more than seven years in office, the president “still didn’t understand the region and the fallout from the invasion” (p.212). Bush’s decision to take the country into war, Nixon contends, rested largely on the purported attempt Saddam had made on his father’s life in the aftermath of the first Gulf War – a “misguided belief” in Nixon’s view.  The younger Bush and his entourage ordered the invasion of a country “without the slightest clue about the people they would be attacking. Even after Saddam’s capture, the White House was only looking for information that supported its decision to go to war” (p.235).

          One of the ironies of the Iraq War, Nixon contends, was that Saddam Hussein and George W. Bush were alike in many ways:

Both had haughty, imperious demeanors.  Both were fairly ignorant of the outside world and had rarely traveled abroad.  Both tended to see things as black and white, good and bad, for and against, and became uncomfortable when presented with multiple alternatives. Both surrounded themselves with compliant advisers and had little tolerance for dissent. Both prized unanimity, at least when it coalesced behind their own views. Both distrusted expert opinion (p.240).

       Nixon is almost as tough on the rest of the team that surrounded Bush and contributed to the decision to go to war, although he found Vice President Dick Cheney to be a source of caution, providing a measure of good sense to discussions.  Cheney was “professional, dignified, and considerate . . . an attentive listener” (p.197-98).  But Nixon is sharply critical of the CIA Director at the time, George Tenet (even while refraining from mentioning the remark most frequently attributed to his former boss, that the answer to the question whether Saddam was stockpiling weapons of mass destruction was a “slam dunk”).

         In Nixon’s view, Tenet transformed the agency’s intelligence gathering function from one of neutral fact-finding, laying out the best factual assessment possible in a given situation, into an agency whose role was to serve up intelligence reports tailored to support the administration’s positions.  Tenet was “too eager to please the White House.  He encouraged analysts to punch up their reports even when the evidence was flimsy, and he surrounded himself with yes men” (p.225).  Nixon recounts how, prior to the 2003 invasion, the line level Iraq team at the CIA was given three hours to respond to a paper prepared by another agency purporting to show a connection between Saddam’s regime and the 9/11 attacks — a paper the team found “full of holes, inaccuracies, sloppy reporting and pie-in-the-sky analysis” (p.229).  Line level analysts drafted a dissenting note, but its objections were “gutted” by CIA leadership (p.230) and the faulty paper went on to serve as an important basis to justify the invasion of Iraq.

          Nixon left the agency in 2011. But in the latter portion of his book he delivers his fair share of parting shots at the post-Iraq CIA, which has become in his view a “sclerotic organization” (p.256) that “badly needs fixing” (p.261).  The agency’s leadership needs to “stop fostering the notion that the CIA is omniscient” and the broader foreign policy community needs to recognize that intelligence analysts can provide “only information and insights, and can’t serve as a crystal ball to predict the future” (p.261).  But as Nixon fires shots at his former agency, he lauds the line level CIA analysts with whom he worked. The analysts represent the “best and the brightest our country has to offer . . . The American people are well served, and their tax dollars well spent, by employing such exemplary public servants. I can actually say about these folks, ’Where do we get such people?’ and not mean it sarcastically” (p.273-74).

* * *

         Was Saddam worth removing from power? Nixon asks this question in his conclusion. “As of this writing, I see only negative consequences for the United States from Saddam’s overthrow” (p.257).  No serious Middle East analyst believes that Iraq was a threat to the United States, he argues.  The United States spent trillions of dollars and wasted the lives of thousands of its military men and women “only to end up with a country that is infinitely more chaotic than Saddam’s Ba’athist Iraq” (p.258).  The United States could have avoided this chaos, which has given rise to ISIS and other forms of Islamic extremism, “had it been willing to live with an aging and disengaged Saddam Hussein” (p.1-2).  Nixon’s conclusion, informed by his opportunity to probe the mindset of both Saddam Hussein and those who determined to remove him from power, rings true today and stings sharply.

Thomas H. Peebles

La Châtaigneraie, France

January 31, 2018







Pledging Allegiance to Stalin and the Soviet Union

Kati Marton, True Believer: Stalin’s Last American Spy 

 Andrew Lownie, Stalin’s Englishman: Guy Burgess, the Cold War, and The Cambridge Spy Ring 

          Spying has frequently been described as the world’s second oldest profession, and it may outrank the first as a subject matter that sells books. A substantial portion of the lucrative market for spy literature belongs to imaginative novelists churning out best-selling thrillers whose pages seem to turn themselves – think John Le Carré. Fortunately, there are also intrepid non-fiction writers who sift through evidence and dig deeply into the historical record to produce accounts of the realities of the second oldest profession and its practitioners, as two recently published biographies attest: Kati Marton’s True Believer: Stalin’s Last American Spy, and Andrew Lownie’s Stalin’s Englishman: Guy Burgess, the Cold War, and The Cambridge Spy Ring.

        Bearing similar titles, these works detail the lives of two men who in the tumultuous 1930s chose to spy for the Soviet Union of Joseph Stalin: American Noel Field (1904-1970) and Englishman Guy Burgess (1911-1963). Burgess, the better known of the two, was one of the infamous “Cambridge Five,” five upper class lads who, while studying at Cambridge in the 1930s, became Soviet spies. Field, less likely to be known to general readers, was a graduate of another elite institution, Harvard University. Seven years older than Burgess, he was recruited to spy for the Soviet Union at about the same time, in the mid-1930s.

           While the 1930s and the war that followed were formative periods for both young men, their stories became noteworthy in the Cold War era that followed World War II. Field spent five years in solitary confinement in post-war Budapest, from 1949 to 1954, imprisoned as a traitor to the communist cause after being used by Stalin and Hungarian authorities in a major show trial designed to root out unreliable elements among Hungary’s communist leadership and consolidate Stalin’s power over the country. His imprisonment led to the imprisonment of his wife, brother and informally adopted daughter. Burgess came to international attention in 1951 when he mysteriously fled Britain for Moscow with Donald Maclean, another of the Cambridge Five.  Burgess and Maclean’s whereabouts remained unknown and the source of much speculation until they resurfaced five years later, in 1956.

            Both men came from comfortable but not super-rich backgrounds.  Each lost his father early in life, a loss that unmoored both of them. After graduating from Harvard and Cambridge with elite diplomas in hand, they even followed similar career paths. Field served in the United States State Department and was recruited during World War II by future CIA Director Allen Dulles to work for the CIA’s predecessor agency, the Office of Strategic Services (OSS), all the while providing information to the Soviet Union. Burgess served during critical periods in the British equivalents, the Foreign Office and its premier intelligence agencies, MI5 and MI6, while he too reported to the Soviet Union.  Field worked with refugees during the Spanish Civil War and World War II. Burgess had a critical stint during the war at the BBC.  Both men ended their lives in exile, Field in Budapest, Burgess in Moscow.

          But the two men could not have been more different in personality.  Field was an earnest American with a Quaker background, outwardly projecting rectitude and seriousness, a “sensitive, self-absorbed idealist and dreamer” (M.3), as Marton puts it. Lownie describes Burgess as “outrageous, loud, talkative, indiscreet, irreverent, overtly rebellious” (L.30), a “magnificent manipulator of people and trader in gossip” (L.324).   Burgess was also openly gay and notoriously promiscuous at a time when homosexual conduct carried serious risks.  Field, Marton argues, was never one of Stalin’s master spies. “He lacked both the steel and the polished performance skills of Kim Philby or Alger Hiss” (M.3).  Lownie claims nearly the opposite for Burgess: that he was the “most important of the Cambridge Spies” (L.x).

          Marton’s biography of Field is likely to be the more appealing of the two for general readers. It is more focused, more selective in its use of evidence and substantively tells a more compelling story, raising questions still worth pondering today. Why did Field’s quest for a life of meaning and high-minded service to mankind lead him to become an apologist for one of the 20th century’s most murderous regimes? How could his faith in that regime remain unshaken even after it imprisoned him for five years, along with his wife, brother and informally adopted daughter? There are no easy answers to these questions, but Marton raises them in a way that leads her readers to consider their implications. “True Believer” seems the perfect title for her biography, a study of the psychology of pledging and maintaining allegiance to Stalin’s Soviet Union.

         “Stalin’s Englishman,” by contrast, struck me as an overstatement for Lownie’s work. Most of the book up to Burgess’ defection to Moscow in 1951 — which comes at about the book’s three-quarter mark — details his interactions in Britain with a vast array of individuals: Soviet handlers and contacts, British work colleagues, lovers, friends, and acquaintances.  Only in a final chapter does Lownie present his argument that Burgess had an enduring impact in the international espionage game and deserves to be considered the most important of the Cambridge Five.  Lownie’s biography suffers from what young people today term TMI – too much information.  He has uncovered a wealth of written documentation on Burgess and seems bent on using all of it, giving his work a gossipy flavor.  At its core, Lownie’s work is probably best understood as a study of how a flamboyant life style proved compatible with taking the pledge to Stalin and the Soviet Union.

* * *

           As a high school youth, Noel Field said he had two overriding goals in life: “to work for international peace, and to help improve the social conditions of my fellow human beings” (M.14). The introspective young Field initially saw communism and the Soviet Union as his means to implement these high-minded, humanitarian goals. But in a “quest for a life of meaning that went horribly wrong” (M.9), Field evolved into a hard-core Stalinist.  Marton frames her book’s central question as: how does an apparently good man “who started out with noble intentions” end up sacrificing “his own and his family’s freedom, a promising career, and his country, all for a fatal myth”? “His is the story of the sometimes terrible consequences of blind faith” (M.1).

         Field was raised in Switzerland, where his father, a well-known, Harvard-educated biologist and outspoken New England pacifist, established a research institute. In secondary school in Zurich, Field was far more introspective and emotionally sensitive than his classmates. He had only one close friend, Herta Vieser, the “plump, blond daughter of a German civil servant” (M.12), whom he subsequently married in 1926.  Field’s father died suddenly of a heart attack at age 53, when Field was 17, shattering the peaceful, well-ordered family life the young man had known up to that time.

         Field failed to find any bearings a year later when he entered Harvard, his father’s alma mater. He knew nothing of America except what he had heard from his father, and at Harvard he was again an outsider among his privileged, callow classmates. But he graduated with full honors after only two years. In his mid-twenties, Marton writes, Field was still a “romantic, idealistic young man” who “put almost total faith in books. He had lived a sheltered, family-centered life” (M.30).

         From Harvard, Field entered the Foreign Service but worked in Washington, at the State Department’s West European Desk, where he performed brilliantly but again did not feel at home, “still in search of deeper fulfillment than any bureaucracy could offer” (M.26). In 1929, he attended an event in New York City sponsored by the Daily Worker, the newspaper of the American Communist Party.  It was a turning point for him.  The “warm, spontaneous fellowship” at the meeting made him think he had realized his childhood dream of “being part of the ‘brotherhood of man’” (M.41). Soviet agents formally recruited Field sometime in 1935, assisted by the persuasive efforts of State Department colleague and friend Alger Hiss.

          For Field, Marton writes, communism was a substitute for his Quaker faith. Like the Quakers, communists “encouraged self-sacrifice on behalf of others.” But the austere Quakers were “no match for the siren song of the Soviet myth: man and society leveled, the promise of a new day for humanity” (M.39-40).  Communism offered a tantalizing dream: “join us to build a new society, a pure, egalitarian utopia to replace the disintegrating capitalist system, a comradely embrace to replace cutthroat competition.”  In embracing communism, Field felt he could “deliver on his long-ago promise to his father to work for world peace” (M.39).

             In 1936, Field left the State Department to take a position in Geneva to work for the League of Nations’ Disarmament Section — and assist the Soviet Union. The following year, he reached another turning point when he participated in the assassination in Switzerland of a “traitor,” Ignaz Reiss, a battle-tested Eastern European Jewish Communist who envisioned exporting the revolution beyond Russia.  Reiss was appalled by the Soviet show trials and executions of 1936-38 and expressed his dismay far too openly for Stalin, making him a marked man. Others may have hatched the plot against Reiss, and still others pulled the trigger, Marton writes, “but Field was prepared to help” (M.246). He had “shown his willingness to do Moscow’s bidding – even as an accessory in a comrade’s murder. He had demonstrated his absolute loyalty to Stalin” (M.68).

             Deeply moved by the Spanish Civil War, Field became involved in efforts to assist victims and opponents of the Franco insurgency.  During the conflict, Field and his wife met a refined German doctor, Wilhelm Glaser, his wife and 17-year-old daughter Erica.  A precocious, feisty teenager, Erica was the only member of her high school class who had refused to join her school’s Hitler Youth Group.  She had contracted typhoid fever when her parents met the Fields. With her parents desperate for medical attention for their daughter, the Fields volunteered to take her with them to Switzerland. In what became an informal adoption, Erica lived with Noel and Herta for the next seven years, with the rest of her life intertwined with that of the Fields.  After Erica’s initial appearance in the book at about the one-third point, she becomes a central and inspiring character in Marton’s otherwise dispiriting narrative – someone who merits her own biography.

            When France fell to the Nazis in 1940, Field landed a job in Marseilles, France, with the Unitarian Service Committee (USC), a Boston-based humanitarian organization then charged with assisting the thousands of French Jews fleeing the Nazis, along with as many as 30,000 refugees from Spain, Germany, and Nazi-occupied territories of Eastern Europe.  Field’s practice was to prioritize communist refugees for assistance, including many hard-core Stalinists rejected by other relief organizations, hoping to repatriate as many as possible to their own countries “to seed the ground for an eventual postwar Communist takeover” (M.106).  It took a while for the USC to pick up on how Field had transformed it from a humanitarian relief organization into what Marton terms a “Red Aid organization” (M.131).

         After the Germans occupied the rest of France in November 1942, the Fields escaped from Marseilles to Geneva, where they continued to assist refugees and provide special attention to communists whom Noel considered potential leaders in Eastern Europe after the war.  While in Geneva, Field attracted the attention of Allen Dulles, an old family friend from Zurich in the World War I era who had also crossed paths with Field at the State Department in Washington.  Dulles, then head of OSS, wanted Field to use his extensive communist connections to infiltrate Nazi-occupied countries of Eastern Europe. With Field acting as a go-between, the OSS provided communists from Field’s network with financial and logistical support both during and after the war.

         But Field failed to understand that his network was composed largely of communists who had fallen into Stalin’s disfavor. Stalin considered them unreliable, with allegiances that might prioritize their home countries – Poland, East Germany, Hungary or Czechoslovakia – rather than the Soviet Union.  Although Stalin tightened the Soviet grip on these countries in the early Cold War years, he failed to bring Yugoslavia’s independent-minded leader, Marshal Josip Tito, into line.  To make sure that no other communist leaders entertained ideas of independence from the Soviet Union, Stalin targeted a host of Eastern European communists as “Titoists,” which became the highest crime in Stalin’s world — much like being a “Trotskyite” in the 1930s.   Stalin chose Budapest as the place for a new round of show trials, analogous to those of 1936-38.

            Back in the United States, in Congressional testimony in 1948, Whittaker Chambers named Field’s long-time friend Alger Hiss as a member of an underground communist cell based in Washington. Hiss categorically denied the allegation and mounted an aggressive counterattack, including a libel suit against Chambers. In the course of defending the suit, Chambers named Field as another communist who had worked at a high level in the State Department.  Field’s double life ended in the aftermath of Chambers’ revelations. He could no longer return to the United States.

            Field’s outing occurred when he was in Prague, seeking a university position after his relief work had ended. From Prague, he was kidnapped and taken to Budapest, where he was interrogated and tortured over his association with Allen Dulles and the CIA.  Like so many loyal communists in the 1930s show trials, Field “confessed” that his rescue of communists during the war was a cover for recruiting for Dulles and the arch-traitor, Tito.   He provided his interrogators with a list of 562 communists he had helped return to Poland, East Germany, Czechoslovakia, and Hungary.  All, Marton writes, “paid with their lives, their freedom, or – the lucky ones — merely their livelihood, for the crime of being ‘Fieldists’” (M.157).  At one point, authorities confronted Field with a man he had never met, a Hungarian national who had previously been a leader within Hungarian communist circles, and ordered Field to accuse the man of being his agent.  Field did so, and the man was later sentenced to death and hanged.

           Hungarian authorities used Field’s “confession” as the centerpiece in a massive 1949 show trial of seven Hungarian communists, including Laszlo Rajk, a lifelong communist and top party theoretician who had been Hungary’s Interior Minister and later its Foreign Minister.  All were accused of being “Fieldists,” who had attempted to overthrow the “peoples’ democracy” on behalf of Allen Dulles, the CIA, and Tito.  Field was not tried, nor did he appear as a witness in the trials.  All defendants admitted that Field had spurred them on; all were subsequently executed. By coincidence, Marton’s parents, themselves dissident Hungarian journalists, covered the trial.

            Field was kept in solitary confinement until released in 1954, the year after Stalin’s death. Marton excoriates Field for a public statement he made after his release. “We are not among those,” he declared, “who blame an entire people, a system or a government for the misdeeds of a handful of the overzealous and the misguided” – a statement Marton quotes with her own emphasis added. Field, she writes, thereby exonerated “one of history’s most cruel human experiments, blaming the jailing and slaughter of hundreds of thousands of innocents on a few excessively fervent bad apples” (M.194).

         Field’s wife Herta traveled to Czechoslovakia in the hope of getting information from Czech authorities on her missing husband’s whereabouts. Those authorities handed her over to their Hungarian counterparts, who placed her in solitary confinement in the same jail as her husband, although neither was aware of the other’s presence during her nearly five years of confinement.   When Field’s younger brother Hermann went looking for Field, he was arrested in Warsaw, where he had worked just prior to the outbreak of the war, assisting endangered refugees to immigrate to Great Britain. Herta and Hermann were also released in 1954. Hermann returned to the United States and published a short work about the experience, Trapped in the Cold War: The Ordeal of an American Family.

            Like Herta and Hermann, Erica Glaser, Field’s unofficially adopted daughter, went searching for Noel, and she too ended up in jail as a result.  Erica had moved to the American zone of occupied Germany after the war, working for the OSS. But she left that job to work for the Communist Party in the Hesse Regional Parliament. There, she met and fell in love with U.S. Army Captain Robert Wallach.  When her party superiors objected to the relationship, Erica broke her connections with the party and the couple moved to Paris. They married in 1948.

           In 1950, Erica decided to search for both Noel and Herta. Using her own Communist Party contacts, Erica was lured to East Berlin, where she was arrested. She was condemned to death by a Soviet military court in Berlin and sent to Moscow’s infamous Lubyanka prison for execution. After Stalin’s death, her death sentence was commuted, but she was shipped to Siberia, where she endured further imprisonment in a Soviet gulag (Marton’s description of Erica’s time in the gulag reads like Caroline Moorehead’s account of several courageous French women who survived Nazi prison camps in World War II, A Train in Winter, one of the first books reviewed here in 2012).

       Erica was released in October 1955 under an amnesty declared by Nikita Khrushchev, but was unable to join her husband in the United States because of State Department concern over her previous Communist Party affiliations.  Allen Dulles intervened on her behalf to reunite her with her family in 1957.  She finally reached the United States, where she lived with her husband and their children in Virginia’s horse country, an ironic landing point for the fiery former communist.  Erica wrote a book based on her experiences in Europe, Light at Midnight, published in 1967, a clever inversion of Arthur Koestler’s Darkness at Noon.  She lived happily and comfortably in Virginia up to her death in 1993.

            Field spent his remaining years in Hungary after his release in 1954.  He fully supported the Soviet Union’s intervention in the 1956 Hungarian uprising. He stopped paying dues to the Hungarian Communist Party after the Soviets put an end to the “Prague Spring” in 1968, but Marton indicates that there is no evidence that the two events were related.  Field “never criticized the system he served, never showed regret for his role in abetting a murderous dictatorship,” Marton concludes. “At the end, Noel Field was still a willing prisoner of an ideology that had captured him when his youthful ardor ran highest” (M.249).  Field died in Budapest in 1970. His wife Herta died ten years later, in 1980.

* * *

            Much like Noel Field, Guy Burgess, “never felt he belonged. He was an outsider” (L.332), Lownie writes.  But Burgess’ motivation for entry into the world’s second oldest profession was far removed from that of the high-minded Field: “Espionage was simply another instrument in his social revolt, another gesture of self-assertion . . . Guy Burgess sought power and realizing he was unable to achieve that overtly, he chose to do so covertly. He enjoyed intrigue and secrets for they were his currency in exerting power and controlling people” (L.332).

         Burgess’ father and grandfather were military men. His father, an officer in the Royal Navy, was frequently away during Burgess’s earliest years, and the boy depended upon his mother for emotional support and guidance. His father died suddenly of a heart attack when Guy was 13, bringing him still closer to his mother.  Burgess attended Eton College, Britain’s most prestigious “public school,” i.e., upper class boarding school, and from there earned a scholarship to study history at Trinity College, Cambridge. When Burgess arrived in 1930, left-wing radicalism dominated Cambridge.

         Burgess entered Cambridge considering himself a socialist and it was an easy step from there to communism, which appeared to many undergraduates as “attractive and simple, a combination of the best of Christianity and liberal politics” (L.41). Fellow undergraduates Kim Philby and Donald Maclean, whom Burgess met early in his tenure at Cambridge, helped move him toward communism.  Both were recruited to work as agents for the Soviet Union while at Cambridge, and Burgess followed suit in late 1934.  Burgess’ contacts within Britain’s homosexual circles made him an attractive recruit for Soviet intelligence services.

         Before defecting to Moscow, Burgess worked first as a producer and publicist at the BBC (for a while alongside fellow Etonian George Orwell), followed by stints as an intelligence officer within both MI5 and MI6.  He joined the Foreign Office in 1944.  While with the Foreign Office, he was posted to the British Embassy in Washington, where he worked for about nine months.  Philby was his immediate boss in Washington, and Burgess lived for a while with Philby’s family. In these positions, Burgess drew attention for his eccentric habits, e.g., constantly chewing garlic; for his slovenly appearance, especially dirty fingernails; and for incessant drinking and smoking – at one point, he was smoking a mind-boggling 60 cigarettes per day.  A Foreign Office colleague’s description was typical: Burgess was a “disagreeable character,” who “presented an unkempt, distinctly unclean appearance . . . his fingernails were always black with dirt. His conversation was no less grimy, laced with obscene jokes and profane language” (L.183). Burgess’ virtues were that he was witty and erudite, often a charming conversationalist, but with a tendency to name-drop and overstate his proximity to powerful government figures.

             Working at the highest levels within Britain’s media, intelligence and foreign policy communities, Burgess frequently seemed on the edge of being dismissed for unprofessional conduct, well before suspicions about his loyalty began to surface.  How Burgess could have remained in these high-level positions despite his eccentricities remains something of a mystery.  One answer is that his untethered, indiscreet life-style served as a sort of cover: no one living like that could possibly be a double agent. As one colleague remarked, if he was really working for the Soviets, “surely he wouldn’t act the part of a parlor communist so obviously – with all that communist talk and those filthy clothes and filthy fingernails” (L.167).   Another answer is that he was a member of Britain’s old boy network, at the very top of the English class system, where there was an ingrained tendency not to be too probing or overly judgmental of one’s social peers.  Ben Macintyre emphasizes this point throughout his biography of Philby, reviewed here in June 2016, and Lownie alludes to it in explaining Burgess.

          The book’s real drama starts with Burgess’ sudden defection from Britain to the Soviet Union in 1951 with Donald Maclean, at a time when British authorities had finally caught on to Maclean – but before official doubts about Burgess had crystallized.  Burgess’ Soviet handler told him – Burgess had recently been sent home from the Embassy in Washington after threatening a Virginia State Trooper who had stopped him for speeding – that he needed to “exfiltrate” Maclean, that is, get him out of Britain.  By leaving himself, Burgess surprised and angered his former boss Philby, who was charged with the British investigation into Maclean’s activities.  Burgess’ defection turned the focus on Philby, who defected himself a decade later.

          The route out of Britain that Maclean and Burgess took remains unclear, as do Burgess’ reasons for accompanying Maclean to the Soviet Union.   The official line was that the departure was nothing more than a “drunken spree by two low-level diplomats,” but the popular press saw the disappearance of the two as a “useful tool to beat the government” (L.264), while of course increasing circulation.  Sometime after his defection, British authorities awoke to the realization that the eccentric Burgess may have been more than just a smooth-talking, chain-smoking drunk.  But they were never able to assemble a solid case against him and did not believe there would be sufficient evidence to prosecute him should he return to Britain.  In fact, he never did, and the issue never had to be faced.

         The two men’s whereabouts remained an international mystery until 1956, when the Soviets staged an outing for a Sunday Times correspondent at a Moscow hotel.  Burgess and Maclean issued a written statement for the correspondent indicating that they had come to Moscow to work for better understanding between the Soviet Union and the West, convinced as they were that neither Britain nor the United States was seriously interested in better relations.   Burgess spent his remaining years in Moscow, where he was lonely and isolated.

        Burgess read voraciously, listened to music, and pursued his promiscuous lifestyle in Moscow, a place where homosexuality was a criminal offense less likely to be overlooked than in Britain.  Burgess clearly missed his former circle of friends in England.  During this period, he took to saying that although he remained a loyal communist, he would prefer to live among British communists. “I don’t like the Russian communists . . . I’m a foreigner here. They don’t understand me on so many matters” (L.315).  Stalin’s Englishman outlasted Stalin by a decade.  Burgess died in Moscow in 1963, at age 52, an adult lifetime of unhealthy living finally catching up with him. He was buried in a Moscow cemetery, the first of the Cambridge Five to go to the grave.

             Throughout the book’s main chapters, Burgess’ impact as a spy gets lost among the descriptions of his excessive smoking, drinking and homosexual trysts.  Burgess passed many documents to the Soviets, Lownie indicates.  Most revealed official British thinking at key points in British-Soviet relations, among them, documents involving the 1938 crisis with Hitler over Czechoslovakia; 1943-45 negotiations with the Soviets over the future of Poland; the Berlin blockade of 1948; and the outbreak of war on the Korean peninsula in 1950.  But there does not seem to be anything comparable to Philby’s cold-blooded revelations of anti-Soviet operations and operatives, leading directly to many deaths; or, for that matter, comparable to Field’s complicity in the Reiss assassination or his denunciation of Hungarian communists.

          In a final chapter, entitled “Summing Up” – which might have been better titled “Why Burgess Matters” – Lownie acknowledges that it is unclear how valuable the many documents Burgess passed to the Soviets really were:

[E]ven when we know what documentation was taken, we don’t know who saw it, when, and what they did with the material. The irony is that the more explosive the material, the less likely it was to be trusted, as Stalin and his cohorts couldn’t believe that it wasn’t a plant. Also, if it didn’t fit in with Soviet assumptions, then it was ignored (L.323-24).

          One of Burgess’ most damaging legacies, Lownie argues, was the defection itself, which “undermined Anglo-American intelligence co-operation at least until 1955, and public respect for the institutions of government, including Parliament and the Foreign Office. It also bequeathed a culture of suspicion and mistrust within the Security Services that was still being played out half a century after the 1951 flight” (L.325-26).  Burgess may have been the “most important of the Cambridge spies,” as Lownie claims at the outset, but I was not convinced that the claim was proven in his book.

* * *

            Noel Field and Guy Burgess, highly intelligent and well educated men, were entirely different in character and motivation.  That both chose to live duplicitous lives as practitioners of the world’s second oldest profession is a telling indication of the mesmerizing power which Joseph Stalin and his murderous ideology exerted over the best and brightest of the generation which came of age in the 1930s.

Thomas H. Peebles

La Châtaigneraie, France

December 25, 2017


Filed under British History, Eastern Europe, European History, German History, History, Russian History, Soviet Union, United States History

Using Space to Achieve Control

Mitchell Duneier, Ghetto:

The Invention of a Place, the History of an Idea 

            In 1516, Venice’s ruling authorities, concerned about an influx into the city of Jews who had been expelled from Spain in 1492, created an official Jewish quarter. They termed the quarter “ghetto” because it was situated on a Venetian island that was known for its copper foundry, geto in Venetian dialect. In 1555, Pope Paul IV forced Rome’s Jews into a similarly enclosed section of the city also referred to as the “ghetto.” Gradually, the term began to be applied to distinctly Jewish residential areas across Europe. After World War II in the United States, the term took on a life of its own, applied to African-American communities in cities in the urban North. In Ghetto: The Invention of a Place, the History of an Idea, Mitchell Duneier, professor of sociology at Princeton University, examines the origins and usages of the word “ghetto.”

            The major portion of Duneier’s book explores how the word influenced selected post World War II thinkers in their analyses of discrimination against African-Americans in urban America. While there were a few instances pre-dating World War II of the use of the term ghetto to describe African-American neighborhoods in the United States, it was Nazi treatment of Jews in Europe that gave impetus to this use of the term. By the 1960s, the use of the word ghetto to refer to African-American neighborhoods had become commonplace. Today, Duneier writes, the idea of the black ghetto in the United States is “synonymous in the social sciences and public policy discussions with such phrases as ‘segregated housing patterns’ and ‘residential racial segregation’” (p.220).

          Duneier wants us to understand the urban ghetto in the United States as a “space for the intrusive social control of poor blacks” (p.xii).  It is not the result of a natural sorting or Darwinian selection; it is not an illustration of the adage that “birds of a feather flock together.” He discourages any attempt to apply the term to, for example, poor whites, Hispanics or Chinese. The notion of a ghetto, he argues, becomes a “less meaningful concept if it is watered down to simply designate a black neighborhood that varies in degree (but not in kind) from white and ethnic neighborhoods of the city.  .  . Extending the definition to other minority groups . . . carries the cost of obscuring the specific mechanisms by which the white majority has historically used space to achieve power over blacks” (p.224). Duneier shows how, in the decades since World War II, theorists have emphasized different types of external controls over African-American communities: restrictive racial covenants in real estate contracts in the 1940s, precluding the sale of properties to African-Americans; the rise of educational and social welfare bureaucracies in the 1950s, 1960s and 1970s; and, more recently, police and prison control of African-American males resulting from the war on drugs.  But Duneier’s  story, tracing the idea of a ghetto, starts in Europe.

* * *

            In medieval times, Jews in French, English and German speaking lands lived in “semi-voluntary Jewish quarters for reasons of safety as well as communal activity and self-help” (p.4). But Jewish quarters were “almost never obligatory or enclosed until the fifteenth century” (p.5). Jews were always free to come and go and were, in varying degrees, part of the community-at-large. This changed with the expulsion of Jews from Spain in 1492, with many migrating to Italy.  Following the designation of the ghetto in Venice in 1516, Pope Paul IV gave impetus to the rise of separate and unequal Jewish quarters when he issued an infamous Papal Bull in 1555, “Cum nimis absurdum.” In that instrument, the Pope mandated that all Rome’s Jews should live “solely in one and the same place, or if that is not possible, in two or three [places] or as many as are necessary, which are to be contiguous and separated completely from the dwellings of Christians” (p.8).  After centuries of identifying themselves as Romans and enjoying relative freedom of movement, Duneier writes, suddenly Rome’s Jews were forcibly relocated to a small strip of land near the Tiber, “packed into a few dark and narrow streets that were regularly inundated by the flooding river” (p.8).

          This pattern prevailed across Europe during the 17th and 18th centuries, with Jews living in predominantly Jewish quarters in most major cities, some semi-voluntary, others mandatory.  “Isolation in space” (p.7) became part of what it meant to be Jewish.  Napoleon’s war of conquest in Italy in 1797 led to the liberation of Jewish ghettos in Venice, Rome and across the Italian peninsula.  In the 19th century, ghettos began to disappear across Europe. Yet Rome remained stubbornly resistant. When Napoleon retreated from Italy in 1814, Pope Pius VII almost immediately reinstated the Roman ghetto, sending the Jews back into the “same dank and overcrowded quarter that they had occupied for centuries” (p.12). A product of papal authority, the Roman ghetto was formally and officially abolished with Italian unification in 1870. Thus, Rome’s Jews, among the first in Europe to be confined to a ghetto, “became the last Jews in Western Europe to win the rights of citizenship in their own country” (p.12).

          Duneier perceives a benign aspect to confinement of Jews in traditional ghettos.  The ghetto was a “comfort zone” for often-thriving Jewish communities, a designated area where Jews were required to live but could exercise their faith freely, in a section of the city where they would not face opprobrium from fellow citizens. Jewish communities possessed “internal autonomy and maintained a wide range of religious, educational, and social institutions” (p.11). In Venice and throughout Europe, the ghetto represented a “compromise that legitimized but carefully controlled [Jewish] presence in the city” (p.7). The traditional European ghetto was thus “always a mixed bag. Separation, while creating disadvantages for the Jews, also created conditions in which their institutional life could continue and even blossom” (p.10).

      In the early 20th century, the word ghetto came to refer to high-density neighborhoods inhabited predominantly but voluntarily by Jews. In the United States, the word frequently denoted neighborhoods inhabited not by African-Americans but by Jewish immigrants from Eastern Europe. Then, when the Nazis came to power in Germany in 1933, they gave ominous new meanings to the word ghetto. Privately, Hitler used the word to compare areas of enforced Jewish habitation to zoos, enclosed areas where, as he put it, Jews could “behave as becomes their nature, while the German people look on as one looks at wild animals” (p.14). Publicly, and more politely, Hitler argued that confined Jewish quarters under the Nazis simply replicated the Catholic Church’s treatment of Jews in 19th century Rome.

          But ghettos controlled by the Nazis were more frequently like prisons, surrounded by barbed-wire walls. The Nazi ghetto was a place established with the “express purpose of destroying its inhabitants through violence and brutality” (p.22), a place where the state exercised the “firmest control over its subjects’ lives” (p.220). The Nazis’ virulent anti-Semitism, Duneier concludes, “transformed the ghetto into a means to accomplish economic enslavement, impoverishment, violence, fear, isolation, and overcrowding in the name of racial purity — all with no escape through conversion, and with unprecedented efficiency” (p.22).

* * *

       The fight in World War II against Hitler and Nazi tyranny, in which African-Americans participated in large numbers, understandably had the effect of highlighting the pervasive discrimination that African-Americans faced in the United States. The modern Civil Rights movement came into being in the years following World War II, focused primarily on the Southern United States and its distinctive system of rigid racial separation known as “Jim Crow.” A less visible battle took place in Northern cities, where attention focused on discrimination in employment, education and housing. A small group of sociologists, centered at the University of Chicago, emphasized how African-Americans in nearly all cities in the urban North were confined to distinct neighborhoods characterized by sub-standard housing, neighborhoods that came to be referred to as ghettos.

       Framing the debate in post-war America was the work of Gunnar Myrdal, the Swedish economist who wrote what is now considered the classic analysis of discrimination in the United States, “An American Dilemma.”  Myrdal’s work, based on research conducted during World War II and published in 1944, took on a high level of importance in post-war America.  Myrdal’s research focused principally on the Jim Crow South, where three-fourths of America’s black population then lived.  But Myrdal also advanced what may seem in retrospect like a naïve if idealistic view of Northern racial segregation: it was due primarily to the practice of inserting restrictive covenants into real estate sales contracts, forbidding the selling of property to minorities (along with African-Americans, Jews and Chinese were other groups often excluded by such covenants). Restrictive covenants directed against African-Americans had a component of racial purity that was uncomfortably similar to Nazi practices, most frequently excluding persons with a single great-grandparent of black ancestry. Such clauses, Myrdal argued, were contrary to the basic American creed of equality.  Once white citizens were made aware of the contradiction, they would cease the practice of inserting such restrictions into real estate contracts, and housing patterns would desegregate.

        Myrdal himself rarely used the term ghetto and his treatment of the urban North was “perfunctory by any standard” (p.58). His main contribution was to view Northern segregation not as a natural occurrence, but as a “phenomenon of the majority’s power over a minority population” (p.63). Myrdal’s notion of majority white control over African-American communities influenced the views of two younger African-American sociologists from the University of Chicago, Horace Cayton and St. Clair Drake. In 1945, Cayton and Drake published Black Metropolis, a work that focused on discrimination in Chicago and the urban North but failed to gain the attention that Myrdal’s work had received. Duneier indicates that Myrdal’s analysis of the urban North suffered because he was unable to work out an arrangement with Cayton to use the younger scholar’s copious notes of interviews and firsthand observations of conditions in Chicago’s African-American communities.  

         Cayton and Drake sought to “systematically explain the situation of blacks who had recently moved from the rural South to the urban North” (p.233). They were among the first to use the word ghetto frequently as a description of African-American communities in the North.  The word was for them a “metaphor for both segregation and Caucasian purity in the Nazi era” (p.71-72): blacks who sought to leave, they wrote, encountered the “invisible barbed wire fence of restrictive covenants” (p.72; Duneier’s emphasis). Cayton and Drake considered black confinement to black neighborhoods as permanent and officially sanctioned, unlike Hispanic or Chinese neighborhoods, giving African-American neighborhoods their ghetto-like quality.  For Cayton and Drake, therefore, ghetto was a term used to highlight the differences between African-American communities and other poor neighborhoods throughout the city.

         Echoing the interpretations of traditional European Jewish ghettos discussed above, Cayton and Drake emphasized the “more pleasant aspects of black life that were symbolic of an authentic black identity” (p.69). They argued that racial separation had created a refuge for blacks in a racist world and that blacks had no particular interest in mingling with white people, “having accommodated themselves over time to a dense and separate institutional life – ‘an intricate web of families, cliques, churches, and voluntary associations, ordered by a system of social classes’ – in their own black communities. This life so absorbed them as to make participation in interracial activities feel superfluous” (p.69). Today, Black Metropolis remains a “major inspiration for efforts to understand racial inequality, due to its focus on Northern racism, physical space, and the consequences of racial segregation” (p.79).

            Another protégé of Myrdal, the renowned psychologist Kenneth Clark, emphasized in the 1950s and 1960s the extent to which external controls of black neighborhoods – absentee landlords and business owners, and school, welfare and public housing bureaucracies – produced a “powerless colony” (p.91). Clark’s 1965 work, Dark Ghetto, which Duneier considers the most important work on the African-American condition in the urban North since Cayton and Drake’s Black Metropolis two decades earlier, argued that the black ghetto was a product of the larger society’s successful “institutionalization of powerlessness” (p.114). Clark looked at segregated residential patterns as just one of several interlocking factors that together produced in ghetto residents a sense of helplessness and suspicion. Others included discrimination in the workplace and unequal educational opportunities. Clark thus saw urban ghettos as reinforcing “vicious cycles occurring within a powerless social, economic, political, and educational landscape” (p.137). Together, these cycles led to what Clark termed a “tangle of pathologies.”

       For Clark, the traditional Jewish European ghetto bore little resemblance to American realities. Rigid housing segregation was “more meaningfully a North American invention, a manner of existence that had little in common with anything that had come before in Europe or even in the U.S. South” (p.114).  More than any other thinker in Duneier’s study, Clark provided the term ghetto with a distinctly American meaning.  In the 1980s and 1990s, African-American sociologist William Julius Wilson rethought much of the received wisdom that had come from or through Myrdal and Clark.

            Wilson took into account the out-migration of African Americans from inner cities that had begun to gather momentum in the 1970s.  In understanding the plight of those left behind, Wilson argued that class had become a more significant factor than race. African-Americans were dividing into two major classes: a middle class, a “black bourgeoisie,” more and more often living outside the urban core – outside the ghetto – in outlying areas of the city, or in the suburbs, not uncommonly in mixed black-white neighborhoods. The black ghetto remained, concentrating and isolating the least skilled, least educated and least fortunate African-Americans, a “black underclass.”  In contrast to the African-American communities Cayton and Drake had described in the 1940s, those left behind in the 1970s and 1980s saw far fewer black role models they could emulate. A new form of American ghetto had emerged by 1980, Wilson argued, “characterized by geographic, social, and economic isolation. Unlike in previous eras, middle-class and lower-class blacks were having very different life experiences in the 1980s” (p.234).

        Wilson further posited that any neighborhood with 40 percent poverty should be termed a ghetto, thereby blurring the distinction between poor black and poor white or Hispanic neighborhoods.  Assistance programs that target poor communities generally, Wilson theorized, were more likely to be approved and implemented than programs targeting only African-American communities.  For the first time since the term ghetto had become part of the analysis of Northern housing patterns in the early post-World War II era, the term was now used without reference to either race or power. With Wilson’s analysis, Duneier contends, the history of the idea of a ghetto in Europe and America “no longer seemed relevant” (p.184).

         Duneier devotes a full chapter to Geoffrey Canada, a charismatic community activist rather than a theorist and scholar. In the 1990s and early 21st century, Canada came to see early education as the key to improving the quality of life in African American neighborhoods – in black ghettos – thereby increasing the range of work and living opportunities for African American youth.  Canada was one of the first to characterize the federal crackdown on drug crime as a tragic mistake, producing alarming rates of black incarceration.  As a result, the country was spending “far more money on prisons than on education” (p.198).

        Two white theorists, Daniel Patrick Moynihan and Oscar Lewis, also figure in Duneier’s analysis. Moynihan, an advisor to presidents Kennedy, Johnson and Nixon, and later a Senator from New York, described the black ghetto in terms of broken families. The relatively large number of illegitimate births and a matriarchal family structure in African-American communities, Moynihan argued, held back both black men and women.  Lewis, an anthropologist from the University of Illinois, advanced the notion of a “culture of poverty,” contending that poverty produced a distinct and debilitating mindset that was remarkably similar throughout the world, in advanced and developing countries, in urban and rural areas alike.

         In a final chapter, Duneier summarizes where his key thinkers have led us in our current conception of the term ghetto in the United States. “By the 1960s, an uplifting portrait of the black ghetto became harder to draw. Ever since, those left behind in the black ghetto have had a qualitatively different existence” (p.219). The word now signifies “restriction and impoverishment in a delimited residential space. This emphasis highlights the important point that today’s residential patterns did not come about ‘naturally’; they were promoted by both private and state actions that were often discriminatory and even coercive” (p.220).

* * *

         Duneier has synthesized some of the most important sociological thinking of the post World War II era on discrimination against African Americans, producing a fascinating, useful and timely work.  But Duneier does not spoon-feed. The basis for his hypothesis that the links between the traditional Jewish European ghetto and the black American ghetto have gradually faded is not readily gleaned from the text. Similarly, how theorists used the term ghetto in their analyses of racial discrimination against African Americans seems at times a minor subtheme, overwhelmed by his treatment of the analyses themselves.  Duneier’s important work thus requires – and merits – a careful reading.

          Thomas H. Peebles

Washington, D. C.

July 27, 2017




Filed under American Society, European History, United States History

Trial By History




Lawrence Douglas, The Right Wrong Man:

John Demjanjuk and the Last Great Nazi War Crimes Trial 

          Among the cases seeking to bring to justice Nazi war criminals and those who abetted their criminality, that of Ivan Demjanjuk was far and away the most protracted, and perhaps the most confounding as well.  From 1976 up to his death in 2012, a few weeks short of his 92nd birthday, Demjanjuk was the subject of investigations and legal proceedings, including two lengthy trials, involving his wartime activities after becoming a Nazi prisoner of war. Born in the Ukraine in 1920, Demjanjuk was conscripted into the Red Army in 1941, injured in battle, and taken prisoner by the Nazis in 1942. After the war, he immigrated to the United States, where he settled in Cleveland and worked in a Ford automobile plant, changing his name to John and becoming a US citizen in 1958.

        Demjanjuk’s unexceptional and unobjectionable American immigrant life was disrupted in 1976 when several survivors of the infamous Treblinka death camp in Eastern Poland identified him as Ivan the Terrible, a notoriously brutal Ukrainian prison guard at Treblinka. In a trial in Jerusalem that began in 1987, an Israeli court found that Demjanjuk was in fact Treblinka’s Ivan and sentenced him to death. But the trial, which began as the most significant Israeli prosecution of a Nazi war criminal since that of Adolf Eichmann in 1961, finished as one of modern history’s most notorious cases of misidentification. In 1993, the Israeli Supreme Court found, based on newly discovered evidence, that Demjanjuk had not been at Treblinka. Rather, the new evidence established that Demjanjuk had served at four other Nazi camps, including 5½ months in 1943 as a prison guard at Sobibor, a camp in Poland, at a time when nearly 30,000 Jews were killed there.  In 2009, Demjanjuk went on trial in Munich for crimes committed at Sobibor. The Munich trial court found Demjanjuk guilty in 2011. With an appeal of the verdict pending, Demjanjuk died ten months later, in 2012.

        The driving force behind both trials was the Office of Special Investigations (“OSI”), a unit within the Criminal Division of the United States Department of Justice. OSI initiated denaturalization and deportation proceedings (“D & D”) against naturalized Americans suspected of Nazi atrocities, usually on the basis of having provided misleading or incomplete information for entry into the United States (denaturalization and deportation are separate procedures in the United States, before different tribunals and with different legal standards; because no legislation criminalized Nazi atrocities committed during World War II, the ex post facto clause of the U.S. Constitution was considered a bar to post-war prosecutions of such acts in the United States). OSI had just come into existence when it initiated the D & D proceedings against Demjanjuk in 1981 that led to his trial in Israel, and its institutional inexperience contributed to the Israeli court’s misidentification of Demjanjuk as Ivan the Terrible. Twenty years later, in 2001, OSI initiated a second round of D & D proceedings against Demjanjuk for crimes committed at Sobibor.  By this time, OSI had added a handful of professional historians to its staff of lawyers (during my career at the US Department of Justice, I had the opportunity to work with several OSI lawyers and historians).

             In his thought-provoking work, The Right Wrong Man: John Demjanjuk and the Last Great Nazi War Crimes Trial, Lawrence Douglas, a professor of Law, Jurisprudence and Social Thought at Amherst College, aims to sort out and make sense of Demjanjuk’s 35-year legal odyssey, United States-Israel-United States-Germany.  Douglas argues that the expertise of OSI historians was the key to the successful 2011 verdict in Munich, and that the Munich proceedings marked a critical transformation within the German legal system. Although 21st century Germany was otherwise a model of responsible atonement for the still unfathomable crimes committed in the Nazi era, its hidebound legal system had up to that point amassed what Douglas terms a “pitifully thin record” (p.11) in bringing Nazi perpetrators to the bar of justice.  But through a “trial by history,” in which the evidence came from “dusty archives rather than the lived memory of survivors” (p.194), the Munich proceedings demonstrated that German courts could self-correct and learn from past missteps.

         The trial in Munich comprises roughly the second half of Douglas’ book. Douglas traveled to Munich to observe the proceedings, and he provides interesting and valuable sketches of the judges, prosecutors and defense attorneys, along with detail about how German criminal law and procedure adapted to meet the challenges in Demjanjuk’s case.  The man on trial in Munich was a minor cog in the wheel of the Nazi war machine, in many ways the polar opposite of Eichmann. No evidence presented in Munich tied Demjanjuk to specific killings during his service at Sobibor. No evidence demonstrated that Demjanjuk, unlike Ivan the Terrible at Treblinka, had engaged in cruel acts during his Sobibor service. There was not even any evidence that Demjanjuk was a Nazi sympathizer. Yet, based on historical evidence, the Munich court concluded that Demjanjuk had served as an accessory to murder at Sobibor.  The camp’s only purpose was extermination of its population, and its guards contributed to that purpose. As Douglas emphatically asserts, all Sobibor guards necessarily served as accessories to murder because “that was their job” (p.220).

* * *

            Created in 1979, OSI “represented a critical step toward mastering the legal problems posed by the Nazi next door” (p.10; a reference to Eric Lichtblau’s incisive The Nazi Next Door, reviewed here in October 2015).   But OSI commenced proceedings to denaturalize Demjanjuk before it was sufficiently equipped to handle the task.  In 1993, after Demjanjuk’s acquittal in Jerusalem as Ivan the Terrible, the United States Court of Appeals for the Sixth Circuit severely reproached OSI for its handling of the proceedings that led to Demjanjuk’s extradition to Israel.  The court found that OSI had withheld exculpatory identification evidence, with one judge suggesting that in seeking to extradite Demjanjuk OSI had succumbed to pressure from Jewish advocacy groups.

            The Sixth Circuit’s ruling was several years in the future when Demjanjuk’s trial began in Jerusalem in February 1987, more than a quarter of a century after completion of the Eichmann trial (the Jerusalem proceeding against Eichmann was the subject of Deborah Lipstadt’s engrossing analysis, The Eichmann Trial, reviewed here in October 2013). The Holocaust survivors who testified at the Eichmann trial had had little or no direct dealing with the defendant. Their purpose was didactic: providing a comprehensive narrative history of the Holocaust from the survivors’ perspective.   The Treblinka survivors who testified at Demjanjuk’s trial a quarter century later had a more conventional purpose: identification of a perpetrator of criminal acts.

            Five witnesses, including four Treblinka survivors and a guard at the camp, identified Demjanjuk as Ivan the Terrible.   Eliahu Rosenberg, who had previously testified at the Eichmann trial, provided a moment of high drama when he approached Demjanjuk, asked him to remove his glasses, looked him in the eyes and declared in Yiddish, the language of the lost communities in Poland, “This is Ivan. I say unhesitatingly and without the slightest doubt. This is Ivan from the [Treblinka] gas chambers. . . I saw his eyes. I saw those murderous eyes” (p.51). The Israeli court also allowed the Treblinka survivors to describe their encounters with Ivan the Terrible as part of a “larger narrative of surviving Treblinka and the Holocaust” (p.81). The court seemed influenced by the legacy of the Eichmann trial; it acted, Douglas emphasizes, “as if the freedom to tell their story was owed to the survivors” (p.81-82).

            The case against Demjanjuk also rested upon an identification card issued at Trawniki, an SS facility in Poland that prepared specially recruited Soviet POWs to work as auxiliaries who provided the SS with “crucial assistance in the extermination of Poland’s Jews, including serving as death camp guards” (p.52). The card contained a photo that unmistakably was of the youthful Demjanjuk (this photo adorns the book’s cover), and accurately reported his date of birth, birthplace, father’s name and identifying features. Trawniki ID 1393 listed Demjanjuk’s service at Sobibor, but not Treblinka. That, Israeli prosecutors explained, was because Sobibor had been his initial assignment at the time the card was issued.

          Demjanjuk’s defense was that he had not served at Treblinka, but his testimony was so riddled with holes and contradictions that the three experienced judges of the court – the fact finders in the proceeding; there was no jury – accepted in full the survivors’ testimony and sentenced Demjanjuk to death in 1988.  The death sentence triggered an automatic appeal to the five-judge Israeli Supreme Court (Eichmann was the only other defendant ever sentenced to death by an Israeli court). The appellate hearing did not take place until 1990, and benefitted from a trove of documents released by the Soviet Union during its period of glasnost (openness) prior to its collapse in 1991.

      The Soviet documents contained a “rather complete” (p.94) picture of Demjanjuk’s wartime service, confirming his work as a camp guard at Sobibor and showing that he had also served at three other camps, Okzawm, Majdanek and Flossenberg, but with no mention of service at Treblinka.  Moreover, the Soviet documentation pointed inescapably to another man, Ivan Marchenko, as Treblinka’s Ivan the Terrible. In 1993, six years after the Jerusalem trial had begun, the Israeli Supreme Court issued a 400-page opinion in which it vacated the conviction. Although the court could have remanded the case for consideration of Demjanjuk’s service at other camps, it pointedly refused to do so. Restarting proceedings “does not seem to us reasonable” (p.110), the court concluded.  OSI, however, took a different view.

* * *

            Although Demjanjuk’s US citizenship was restored in 1998, OSI determined that neither his advancing age – he was then nearly 80 – nor his partial exoneration in Jerusalem after protracted proceedings was sufficient to allow him to escape being called to account for his service at Sobibor. Notwithstanding the rebuke from the federal court of appeals for its handling of the initial D & D proceedings, OSI in 2001 instituted another round of proceedings against Demjanjuk, 20 years after the first round. Everyone at OSI, Douglas writes, “recognized the hazards in seeking to denaturalize Demjanjuk a second time. The Demjanjuk disaster continued to cast a long shadow over the unit, marring its otherwise impressive record of success” (p.126). By this time, however, OSI had assembled a team of professional historians who had “redefined our historical understanding of the SS’s process of recruiting and training the auxiliaries who crucially assisted in genocide” (p.126). The work of the OSI historians proved pivotal in the second round of D & D proceedings, which terminated in 2008 with a ruling that Demjanjuk be removed from the United States; and pivotal in persuading a reluctant Germany to request that Demjanjuk be extradited to stand trial in Munich.

            The German criminal justice system at the time of Demjanjuk’s extradition was inherently cautious and rule bound — perhaps the epitome of what a normal legal system should be in normal times and very close to what the victorious Allies would have hoped for in 1945 as they set out to gradually transfer criminal justice authority to the vanquished country. But, as Douglas shows, that system prior to the Demjanjuk trial was poorly equipped to deal with the enormity of the Nazi crimes committed in the name of the German state. Numerous German legal conceptions constituted obstacles to successful prosecutions of former Nazis and their accomplices.

          After Germany regained its sovereignty after World War II and became responsible for its own criminal justice system, it “tenaciously insisted that Nazi atrocities be treated as ordinary crimes, requiring no special courts, procedures, or law to bring their perpetrators to justice” (p.20). Service in a Nazi camp, by itself, did not constitute a crime under German law.  A guard could be tried as an accessory to murder, but only if his acts could be linked to specific killings. There was also the issue of the voluntariness of one’s service in a Nazi camp. The German doctrine of “putative necessity” allowed a defendant to show that he entertained a reasonable belief that he had no choice but to engage in criminal acts.

            In the Munich trial, the prosecution’s case turned “less on specific evidence of what John Demjanjuk did than on historical evidence about what people in Demjanjuk’s position must have done” (p.218) at Sobibor, which, like Treblinka, had been a pure extermination facility whose only function was to kill its prison population.  With Demjanjuk’s service at Sobibor established beyond dispute, but without evidence that he had “killed with his own hand” (p.218), the prosecution in Munich presented a “full narrative history of how the camp and its guards functioned . . . [through a] comprehensive historical study of Sobibor and its Trawniki-trained guards” (p.219).

          Historical research developed by OSI historians and presented to the Munich court demonstrated that Trawniki guards “categorically ceased to be POWs once they entered Trawniki” (p.226). They were paid and received regular days off, paid home leave and medical care. They were issued firearms and were provided uniforms. The historical evidence thus demonstrated that the difference between the death-camp inmates and the Trawnikis who guarded them was “stark and unequivocal” (p.226).  Far from being “glorified prisoners,” Trawniki-trained guards were “vital and valued assistants in genocide” (p.228). The historical evidence further showed that all guards at Sobibor were “generalists.” They rotated among different functions, such as guarding the camp’s perimeter and managing a “well-rehearsed process of extermination.” All “facilitated the camp’s function: the mass killings of Jews” (p.220).

         Historical evidence further demolished the “putative necessity” defense, under which a defendant claimed a reasonable belief that he would face the direst consequences if he did not participate in the camp’s activities. An “extraordinary research effort was dedicated to exploring the question of duress, and the results were astonishing: historians failed to uncover so much as a single instance in which a German officer or NCO faced ‘dire punishment’ for opting out of genocide” (p.223).  The historical evidence thus provided the foundation for the Munich court to find Demjanjuk guilty as an accessory to murder. He was sentenced to five years’ imprisonment but released to a Bavarian nursing home pending appeal. Ten months later, on March 17, 2012, he died. Because his appeal was never heard, his lawyer was able to argue that his conviction had no legal effect and that Demjanjuk had died an innocent man.

           The Munich court’s holding that Demjanjuk had been an accessory to murder underscored the value of years of historical research. As Douglas writes:

Without the painstaking archival work and interpretative labors of the OSI’s historians, the court could never have confidently reached its two crucial findings: that in working as a Trawniki at Sobibor, Demjanjuk had necessarily served as an accessory to murder; and that in choosing to remain in service when others chose not to, he acted voluntarily. This “trial by history” enabled the court to master the prosecutorial problem posed by the auxiliary to genocide who operates invisibly in an exterminatory apparatus (p.255-56).

          In the aftermath of Demjanjuk’s conviction, German prosecutors considered charging as many as 30 still-living camp guards. One, Oskar Gröning, a former SS guard at Auschwitz, was convicted in 2015, in Lüneburg, near Hamburg.  Gröning admitted in open court that it was “beyond question that I am morally complicit. . . This moral guilt I acknowledge here, before the victims, with regret and humility” (p.258).  Gröning’s trial “would never have been possible without Demjanjuk’s conviction” (p.258), Douglas indicates. Camp guards such as Demjanjuk and Gröning were convicted “not because they committed wanton murders, but because they worked in factories of death” (p.260).

* * *

        Thirty years elapsed between Demjanjuk’s initial D & D proceedings in the United States in 1981 and the trial court’s verdict in Munich in 2011. Douglas acknowledges that the decision to seek to denaturalize Demjanjuk a second time and try him in Munich after the spectacularly botched trial in Jerusalem could be seen as prosecutorial overreach.  But despite these misgivings, Douglas strongly supports the Munich verdict: “not because I believe it was vital to punish Demjanjuk, but because the German court delivered a remarkable and just decision, one which few observers would have predicted from Germany’s long legal struggle with the legacy of Nazi genocide” (p.15).   Notwithstanding all the conceptual obstacles created by a legal system that treated the Holocaust as an “ordinary crime,” German courts in Demjanjuk’s case “managed to comprehend the Holocaust as a crime of atrocity” (p.260).  Demjanjuk’s conviction therefore serves as a reminder, Douglas concludes, that the Holocaust was “not accomplished through the acts of Nazi statesmen, SS henchmen, or vicious sociopaths alone. It was [also] made possible by the thousands of lowly foot soldiers of genocide. Through John Demjanjuk, they were at last brought to account” (p.257).

Thomas H. Peebles

Washington, D.C.

July 10, 2017



Filed under German History, History, Israeli History, Rule of Law, United States History

Honest Broker



Michael Doran, Ike’s Gamble:

America’s Rise to Dominance in the Middle East 


       On July 26, 1956, Egypt’s President Gamal Abdel Nasser stunned the world by announcing the nationalization of the Suez Canal, a critical conduit through Egypt for the transportation of oil between the Mediterranean Sea and the Indian Ocean. Constructed between 1859 and 1869, the canal was owned by the Anglo-French Suez Canal Company. What followed three months later was the Suez Crisis of 1956: on October 29, Israeli brigades invaded Egypt across its Sinai Peninsula, advancing to within ten miles of the canal.  Britain and France, following a scheme concocted with Israel to retake the canal and oust Nasser, demanded that both Israeli and Egyptian troops withdraw from the occupied territory. Then, on November 5th, British and French forces invaded Egypt and occupied most of the Canal Zone, the territory along the canal. The United States famously opposed the joint operation and, through the United Nations, forced Britain and France out of Egypt.  Nearly simultaneously, the Soviet Union ruthlessly suppressed an uprising in Hungary.

       The autumn of 1956 was thus a tumultuous time. Across the globe, it was a time when colonies were clamoring for and achieving independence from former colonizers, and the United States and the Soviet Union were competing for the allegiance of emerging states in what was coming to be known as the Third World.  In the volatile and complex Middle East, it was a time of rising nationalism. Nasser, a wildly ambitious general who came to power after a 1952 military coup had deposed the King of Egypt, aspired to become not simply the leader of his country but also of the Arab speaking world, even the entire Muslim world.  By 1956, Nasser had emerged as the region’s most visible nationalist. But he was far from the only voice in the Middle East seeking to speak for Middle East nationalism. Syria, Jordan, Lebanon and Iraq were also imbued with the rising spirit of nationalism and saw Nasser as a rival, not a fraternal comrade-in-arms.

       Michael Doran’s Ike’s Gamble: America’s Rise to Dominance in the Middle East provides background and context for the United States’ decision not to support Britain, France and Israel during the 1956 Suez crisis. As his title suggests, Doran places America’s President, war hero and father figure Dwight D. Eisenhower, known affectionately as Ike, at the middle of the complicated Middle East web (although Nasser probably merited a place in Doran’s title: “Ike’s Gamble on Nasser” would have better captured the spirit of the narrative). Behind the perpetual smile, Eisenhower was a cold-blooded realist who was “unshakably convinced” (p.214) that the best way to advance American interests in the Middle East and hold Soviet ambitions in check was for the United States to play the role of an “honest broker” in the region, sympathetic to the region’s nationalist aspirations and not too closely aligned with its traditional allies Britain and France, or with the young state of Israel.

       But Doran, a senior fellow at the Hudson Institute and former high level official at the National Security Council and Department of Defense in the administration of George W. Bush, goes on to argue that Eisenhower’s vision of the honest broker – and his “bet” on Nasser – were undermined by the United States’ failure to recognize the “deepest drivers of the Arab and Muslim states, namely their rivalries with each other for power and authority” (p.105). Less than two years after taking Nasser’s side in the 1956 Suez Crisis, Eisenhower seemed to reverse himself.  By mid-1958, Doran reveals, Eisenhower had come to regret his bet on Nasser and his refusal to back Britain, France and Israel during the crisis. Eisenhower kept this view largely to himself, however, distorting the historical picture of his Middle East policies.

        Although Doran considers Eisenhower “one of the most sophisticated and experienced practitioners of international politics ever to reside in the White House,” the story of his relationship with Nasser is at bottom a lesson in the “dangers of calibrating the distinction between ally and enemy incorrectly” (p.13).  Or, as he puts it elsewhere, Eisenhower’s “bet” on Nasser’s regime is a “tale of Frankenstein’s monster, with the United States as the mad scientist and the new regime as his uncontrollable creation” (p.10).

* * *

      The “honest broker” approach to the Middle East dominated the Eisenhower administration from its earliest days in 1953. Eisenhower, his Secretary of State John Foster Dulles, and most of their key advisors shared a common picture of the volatile region. Trying to wind down a war in Korea they had inherited from the Truman Administration, they considered the Middle East the next and most critical region of confrontation in the global Cold War between the Soviet Union and the United States.  As they saw it, in the Middle East the United States found itself caught between Arabs and other “indigenous” nationalities on one side, and the British, French, and Israelis on the other. “Each side had hold of one arm of the United States, which they were pulling like a tug rope. The picture was so obvious to almost everyone in the Eisenhower administration that it was understood as an objective description of reality” (p.44). It is impossible, Doran writes, to exaggerate the “impact that the image of America as an honest broker had on Eisenhower’s thought . . . The notion that the top priority of the United States was to co-opt Arab nationalists by helping them extract concessions – within limits – from Britain and Israel was not open to debate. It was a view that shaped all other policy proposals” (p.10).

         Alongside Ike’s “bet” on Nasser, the book’s second major theme is the deterioration of the famous “special relationship” between Britain and the United States during Eisenhower’s first term, due in large measure to differences over Egypt, the Suez Canal, and Nasser (and, to quibble further with the book’s title, “Britain’s Fall from Power in the Middle East” in my view would have captured the spirit of the narrative better than “America’s Rise to Dominance in the Middle East”).  The Eisenhower administration viewed Britain’s once mighty empire as a relic of the past, out of place in the post World War II order. It viewed Britain’s leader, Prime Minister Winston Churchill, in much the same way. Eisenhower entered his presidency convinced that it was time for Churchill, then approaching age 80, to exit the world stage and for Britain to relinquish control of its remaining colonial possessions – in Egypt, its military base and sizeable military presence along the Suez Canal.

      Anthony Eden replaced Churchill as prime minister in 1955.  A leading anti-appeasement ally of Churchill in the 1930s, by the 1950s Eden shared Eisenhower’s view that Churchill had become a “wondrous relic” who was “stubbornly clinging to outmoded ideas” (p.20) about Britain’s empire and its place in the world.  Although interested in aligning Britain’s policies with the realities of the post World War II era, Eden led the British assault on Suez in 1956.  With  “his career destroyed” (p.202), Eden was forced to resign early in 1957.

       If the United States today also has a “special relationship” with Israel, that relationship had yet to emerge during the first Eisenhower term.  Israel’s circumstances were of course entirely different from those of Britain and France, a young country surrounded by Arab-speaking states implacably hostile to its very existence. President Truman had formally recognized Israel less than a decade earlier, in 1948.  But substantial segments of America’s foreign policy establishment in the 1950s continued to believe that such recognition had been in error. Not least among them was John Foster Dulles, Eisenhower’s Secretary of State.  There seemed to be more than a whiff of anti-Semitism in Dulles’ antagonism toward Israel.

        Describing Israel as the “darling of Jewry throughout the world” (p.98), Dulles decried the “potency of international Jewry” (p.98) and warned that the United States should not be seen as a “backer of expansionist Zionism” (p.77).  For the first two years of the Eisenhower administration, Dulles followed a policy designed to “’deflate the Jews’ . . . by refusing to sell arms to Israel, rebuffing Israeli requests for security guarantees, and diminishing the level of financial assistance to the Jewish state” (p.99).   Dulles’ views were far from idiosyncratic. Israel “stirred up deep hostility among the Arabs” and many of America’s foreign policy elites in the 1950s ”saw Israel as a liability” (p.9). Without success, the United States sought Nasser’s agreement to an Arab-Israeli accord which would have required limited territorial concessions from Israel.

       Behind the scenes, however, the United States brokered a 1954 Anglo-Egyptian agreement, by which Britain would withdraw from its military base in the Canal Zone over an 18-month period, with Egypt agreeing that Britain could return to its base in the event of a major war. Doran terms this Eisenhower’s “first bet” on Nasser. Ike “wagered that the evacuation of the British from Egypt would sate Nasser’s nationalist appetite. The Egyptian leader, having learned that the United States was willing and able to act as a strategic partner, would now keep Egypt solidly within the Western security system. It would not take long before Eisenhower would come to realize that Nasser’s appetite only increased with eating” (p.67-68).

        As the United States courted Nasser as a voice of Arab nationalism and a bulwark against Soviet expansion into the region, it also encouraged other Arab voices. In what the United States imprecisely termed the “Northern Tier,” it supported security pacts between Turkey and Iraq and made overtures to Egypt’s neighbors Syria and Jordan. Nasser adamantly opposed these measures, considering them a means of constraining his own regional aspirations and preserving Western influence through the back door.  The “fatal intellectual flaw” of the United States’ honest broker strategy, Doran argues, was that it “imagined the Arabs and Muslims as a unified bloc. It paid no attention whatsoever to all of the bitter rivalries in the Middle East that had no connection to the British and Israeli millstones. Consequently, Nasser’s disputes with his rivals simply did not register in Washington as factors of strategic significance” (p.78).

           In September 1955, Nasser shocked the United States by concluding an agreement to buy arms from the Soviet Union, through Czechoslovakia, one of several indications that he was at best playing the West against the Soviet Union, at worst tilting toward the Soviet side.  Another came in May 1956, when Egypt formally recognized Communist China. In July 1956, partially in reaction to Nasser’s pro-Soviet dalliances, Dulles informed the Egyptian leader that the United States was pulling out of a project to provide funding for a dam across the Nile River at Aswan, Nasser’s “flagship development project . . . [which was] expected to bring under cultivation hundreds of thousands of acres of arid land and to generate millions of watts of electricity” (p.167).

         Days later, Nasser countered by announcing the nationalization of the Suez Canal, predicting that the tolls collected from ships passing through the canal would pay for the dam’s construction within five years. Doran characterizes Nasser’s decision to nationalize the canal as the “single greatest move of his career.” It is impossible to exaggerate, he contends, the “power of the emotions that the canal takeover stirred in ordinary Egyptians. If Europeans claimed that the company was a private concern, Egyptians saw it as an instrument of imperial exploitation – ‘a state within a state’. . . [that was] plundering a national asset for the benefit of France and Britain” (p.171).

            France, otherwise largely missing in Doran’s detailed account, concocted the scheme that led to the October 1956 crisis.  Concerned that Nasser was providing arms to anti-French rebels in Algeria, France proposed to Israel what Doran terms a “stranger than fiction” (p.189) plot by which the Israelis would invade Egypt. Then, in order to protect shipping through the canal, France and Britain would:

issue an ultimatum demanding that the belligerents withdraw to a position of ten miles on either side of the canal, or face severe consequences. The Israelis, by prior arrangement, would comply. Nasser, however, would inevitably reject the ultimatum, because it would leave Israeli forces inside Egypt while simultaneously compelling Egyptian forces to withdraw from their own sovereign territory. An Anglo-French force would then intervene to punish Egypt for noncompliance. It would take over the canal and, in the process, topple Nasser (p.189).

The crisis unfolded more or less according to this script when Israeli brigades invaded Egypt on October 29th and Britain and France launched their joint invasion on November 5th. Nasser sank ships in the canal and blocked oil tankers headed through it to Europe.

         Convinced that acquiescence in the invasion would drive the entire Arab world to the Soviet side in the global Cold War, the United States issued measured warnings to Britain and France to give up their campaign and withdraw from Egyptian soil. If Nasser was by then a disappointment to the United States, Doran writes, the “smart money was still on an alliance with moderate nationalism, not with dying empires” (p.178). But when Eden telephoned the White House on November 7, 1956, largely to protest the United States’ refusal to sell oil to Britain, Ike went further. In that phone call, Eisenhower as honest broker “decided that Nasser must win the war, and that he must be seen to win” (p.249).  Eisenhower’s hardening toward his traditional allies a week into the crisis, Doran contends, constituted his “most fateful decision of the Suez Crisis: to stand against the British, French, and Israelis in [a] manner that was relentless, ruthless, and uncompromising . . . [Eisenhower] demanded, with single-minded purpose, the total and unconditional British, French, and Israeli evacuation from Egypt. These steps, not the original decision to oppose the war, were the key factors that gave Nasser the triumph of his life” (p.248-49).

        When the financial markets caught wind of the blocked oil supplies, the value of the British pound plummeted and a run on sterling reserves ensued. “With his currency in free fall, Eden became ever more vulnerable to pressure from Eisenhower. Stabilizing the markets required the cooperation of the United States, which the Americans refused to give until the British accepted a complete, immediate, and unconditional withdrawal from Egypt” (p.196). At almost the same time, Soviet tanks poured into Budapest to suppress a burgeoning Hungarian pro-democracy movement. The crisis in Eastern Europe had the effect of “intensifying Eisenhower’s and Dulles’s frustration with the British and the French. As they saw it, Soviet repression in Hungary offered the West a prime opportunity to capture the moral high ground in international politics – an opportunity that the gunboat diplomacy in Egypt was destroying” (p.197). The United States supported a United Nations General Assembly resolution calling for an immediate ceasefire and withdrawal of invading troops. Britain, France and Israel had little choice but to accept these terms in December 1956.

       In the aftermath of the Suez Crisis, the emboldened Nasser continued his quest to become the region’s dominant leader. In February 1958, he engineered the formation of the United Arab Republic, a political union between Egypt and Syria that he envisioned as the first step toward a broader pan-Arab state (in fact, the union lasted only until 1961). He orchestrated a coup in Iraq in July 1958. Later that month, Eisenhower sent American troops into Lebanon to avert an Egyptian-led uprising against the pro-western government of Christian president Camille Chamoun. Sometime in the period between the Suez Crisis of 1956 and the intervention in Lebanon in 1958, Doran argues, Eisenhower withdrew his bet on Nasser, coming to the view that his support of Egypt during the 1956 Suez crisis had been a mistake.

        The Eisenhower of 1958 “consistently and clearly argued against embracing Nasser” (p.231).  He now viewed Nasser as a hardline opponent of any reconciliation between Arabs and Israel, squarely in the Soviet camp. Eisenhower, a “true realist with no ideological ax to grind,” came to recognize that his Suez policy of “sidelining the Israelis and the Europeans simply did not produce the promised results. The policy was . . . a blunder” (p.255).   Unfortunately, Doran argues, Eisenhower kept his views to himself until well into the 1960s and few historians picked up on his change of mind. This allowed those who sought to distance United States policy from Israel to cite Eisenhower’s stance in the 1956 Suez Crisis, without taking account of Eisenhower’s later reconsideration of that stance.

* * *

      Doran relies upon an extensive mining of diplomatic archival sources, especially those of the United States and Great Britain, to piece together this intricate depiction of the Eisenhower-Nasser relationship and the 1956 Suez Crisis. These sources allow Doran to emphasize the interactions of the key actors in the Middle East throughout the 1950s, including personal animosities and rivalries, and intra-governmental turf wars.  He writes in a straightforward, unembellished style. Helpful subheadings within each chapter make his detailed and sometimes dense narrative easier to follow. His work will appeal to anyone who has worked in an Embassy overseas, to Middle East and foreign policy wonks, and to general readers with an interest in the 1950s.

Thomas H. Peebles

Saint Augustin-de-Desmaures

Québec, Canada

June 19, 2017


Filed under American Politics, British History, Uncategorized, United States History, World History

Portrait of a President Living on Borrowed Time

Joseph Lelyveld, His Final Battle:

The Last Months of Franklin Roosevelt 

            During the last year and a half of his life, from mid-October 1943 to his death in Warm Springs, Georgia on April 12, 1945, Franklin D. Roosevelt’s presidential plate was full, even overflowing. He was grappling with winning history’s most devastating  war and structuring a lasting peace for the post-war global order, all the while tending to multiple domestic political demands. But Roosevelt spent much of this time out of public view in semi-convalescence, often in locations outside Washington, with limited contact with the outside world. Those who met the president, however, noticed a striking weight loss and described him with words like “listless,” “weary,” and “easily distracted.” We now know that Roosevelt had life-threatening high blood pressure, termed malignant hypertension, making him susceptible to a stroke or coronary attack at any moment. Roosevelt’s declining health was carefully shielded from the public and only rarely discussed directly, even within his inner circle. At the time, probably not more than a handful of doctors were aware of the full gravity of Roosevelt’s physical condition, and it is an open question whether Roosevelt himself was aware.

In His Final Battle: The Last Months of Franklin Roosevelt, Joseph Lelyveld, former executive editor of the New York Times, seeks to shed light upon, if not answer, this open question. Lelyveld suggests that the president likely was more aware than he let on of the implications of his declining physical condition. In a resourceful portrait of America’s longest serving president during his final year and a half, Lelyveld considers Roosevelt’s political activities against the backdrop of his health. The story is bookended by Roosevelt’s meetings to negotiate the post-war order with fellow wartime leaders Winston Churchill and Joseph Stalin, in Teheran in December 1943 and at Yalta in the Crimea in February 1945. Between the two meetings came Roosevelt’s 1944 decision to run for an unprecedented fourth term, a decision he reached just weeks prior to the Democratic National Convention that summer, and the ensuing campaign.

Lelyveld’s portrait of a president living on borrowed time emerges from an excruciatingly thin written record of Roosevelt’s medical condition. Roosevelt’s medical file disappeared without explanation from a safe at Bethesda Naval Hospital shortly after his death.   Unable to consider Roosevelt’s actual medical records, Lelyveld draws clues  concerning his physical condition from the diary of Margaret “Daisy” Suckley, discovered after Suckley’s death in 1991 at age 100, and made public in 1995. The slim written record on Roosevelt’s medical condition limits Lelyveld’s ability to tease out conclusions on the extent to which that condition may have undermined his job performance in his final months.

* * *

            Daisy Suckley, a distant cousin of Roosevelt, was a constant presence in the president’s life in his final years and a keen observer of his physical condition. During Roosevelt’s last months, the “worshipful” (p.3) and “singularly undemanding” Suckley had become what Lelyveld terms the “Boswell of [Roosevelt’s] rambling ruminations,” secretly recording in an “uncritical, disjointed way the hopes and daydreams” that occupied the frequently inscrutable president (p.75). By 1944, Lelyveld notes, there was “scarcely a page in Daisy’s diary without some allusion to how the president looks or feels” (p.77).   Lelyveld relies heavily upon the Suckley diary out of necessity, given the disappearance of Roosevelt’s actual medical records after his death.

Lelyveld attributes the disappearance to Admiral Ross McIntire, an ears-nose-and-throat specialist who served both as Roosevelt’s personal physician and Surgeon General of the Navy. In the latter capacity, McIntire oversaw a wartime staff of 175,000 doctors, nurses and orderlies at 330 hospitals and medical stations around the world. Earlier in McIntire’s career, Roosevelt’s press secretary had upbraided him for allowing the president to be photographed in his wheelchair. From that point forward, McIntire understood that a major component of his job was to conceal Roosevelt’s physical infirmities and protect and promote a vigorously healthy public image of the president. The “resolutely upbeat” (p.212) McIntire, a master of “soothing, well-practiced bromides” (p.226), thus assumes a role in Lelyveld’s account which seems as much “spin doctor” as actual doctor. His most frequent message for the public was that the president was in “robust health” (p.22), in the process of “getting over” a wide range of lesser ailments such as a heavy cold, flu, or bronchitis.

A key turning point in Lelyveld’s story occurred in mid-March 1944, 13 months prior to Roosevelt’s death, when the president’s daughter Anna Roosevelt Boettiger confronted McIntire and demanded to know more about what was wrong with her father. McIntire doled out his “standard bromides, but this time they didn’t go down” (p.23). Anna later said that she “didn’t think McIntire was an internist who really knew what he was talking about” (p.93). In response, however, McIntire brought in Dr. Howard Bruenn, the Navy’s top cardiologist. Evidently, Lelyveld writes, McIntire had “known all along where the problem was to be found” (p.23). Bruenn was apparently the first cardiologist to have examined Roosevelt.

McIntire promised to have Roosevelt’s medical records delivered to Bruenn prior to his initial examination of the president, but failed to do so, an “extraordinary lapse” (p.98) which Lelyveld regards as additional evidence that McIntire was responsible for the disappearance of those records after Roosevelt’s death the following year. Bruenn found that Roosevelt was suffering from “acute congestive heart failure” (p.98). He recommended that the wartime president avoid “irritation,” severely cut back his work hours, rest more, and reduce his smoking habit, then a daily pack and a half of Camel cigarettes. In the midst of the country’s struggle to defeat Nazi Germany and imperial Japan, its leader was told that he “needed to sleep half his time and reduce his workload to that of a bank teller” (p.99), Lelyveld wryly notes.  Dr. Bruenn saw the president regularly from that point onward, traveling with him to Yalta in February 1945 and to Warm Springs in April of that year.

Ten days after Dr. Bruenn’s diagnosis, Roosevelt told a newspaper columnist, “I don’t work so hard any more. I’ve got this thing simplified . . . I imagine I don’t work as many hours a week as you do” (p.103). The president, Lelyveld concludes, “seems to have processed the admonition of the physicians – however it was delivered, bluntly or softly – and to be well on the way to convincing himself that if he could survive in his office by limiting his daily expenditure of energy, it was his duty to do so” (p.103).

At that time, Roosevelt had not indicated publicly whether he wished to seek a fourth presidential term and had not discussed this question with any of his advisors. Moreover, with the “most destructive military struggle in history approaching its climax, there was no one in the White House, or his party, or the whole of political Washington, who dared stand before him in the early months of 1944 and ask face-to-face for a clear answer to the question of whether he could contemplate stepping down” (p.3). The hard if unspoken political truth was that Roosevelt was the Democratic party’s only hope to retain the White House. There was no viable successor in the party’s ranks. But his re-election was far from assured, and public airing of concerns about his health would be unhelpful, to say the least, in his re-election bid. Roosevelt did not make his actual decision to run until just weeks before the 1944 Democratic National Convention in Chicago.

At the convention, Roosevelt’s then vice-president, Henry Wallace, and his counselors Harry Hopkins and Jimmy Byrnes jockeyed for the vice-presidential nomination, along with William Douglas, already a Supreme Court justice at age 45. There’s no indication that Senator Harry S. Truman actively sought to be Roosevelt’s running mate. Lelyveld writes that it is a tribute to FDR’s “wiliness” that the notion has persisted over the years that he was “only fleetingly engaged in the selection” of his 1944 vice-president and that he was “simply oblivious when it came to the larger question of succession” (p.172). To the contrary, although he may not have used the word “succession” in connection with his vice-presidential choice, Roosevelt “cared enough about qualifications for the presidency to eliminate Wallace as a possibility and keep Byrnes’s hopes alive to the last moment, when, for the sake of party unity, he returned to Harry Truman as the safe choice” (p.172-73).

Having settled upon Truman as his running mate, Roosevelt indicated that he did not want to campaign as usual because the war was too important. But campaign he did, and Lelyveld shows how hard he campaigned – and how hard it was for him given his deteriorating health, which aggravated his mobility problems. The outcome was in doubt up until Election Day, but Roosevelt was resoundingly reelected to a fourth presidential term. The president could then turn his full attention to the war effort, focusing both upon how the war would be won and how the peace would be structured. Roosevelt’s foremost priority was structuring the peace; the details on winning the war were largely left to his staff and to the military commanders in the field.

Roosevelt badly wanted to avoid the mistakes that Woodrow Wilson had made after World War I. He was putting together the pieces of an organization already referred to as the United Nations and fervently sought  the participation and support of his war ally, the Soviet Union. He also wanted Soviet support for the war against Japan in the Pacific after the Nazi surrender, and for an independent and democratic Poland. In pursuit of these objectives, Roosevelt agreed to travel over 10,000 arduous miles to Yalta, to meet in February 1945 with Stalin and Churchill.

In Roosevelt’s mind, Stalin was by then the key both to victory on the battlefield and to a lasting peace afterwards — and he was, in Roosevelt’s phrase, “get-at-able” (p.28) with the right doses of the legendary Roosevelt charm.   Roosevelt had begun his serious courtship of the Soviet leader at their first meeting in Teheran in December 1943.  His fixation on Stalin, “crossing over now and then into realms of fantasy” (p.28), continued at Yalta. Lelyveld’s treatment of Roosevelt at Yalta covers similar ground to that in Michael Dobbs’ Six Months That Shook the World, reviewed here in April 2015. In Lelyveld’s account, as in that of Dobbs, a mentally and physically exhausted Roosevelt at Yalta ignored the briefing books his staff prepared for him and relied instead upon improvisation and his political instincts, fully confident that he could win over Stalin by force of personality.

According to cardiologist Bruenn’s memoir, published a quarter of a century later, early in the conference Roosevelt showed worrying signs of oxygen deficiency in his blood. His habitually high blood pressure readings revealed a dangerous condition, pulsus alternans, in which every second heartbeat was weaker than the preceding one, a “warning signal from an overworked heart” (p.270).   Dr. Bruenn ordered Roosevelt to curtail his activities in the midst of the conference. Churchill’s physician, Lord Moran, wrote that Roosevelt had “all the symptoms of hardening of arteries in the brain” during the conference and gave the president “only a few months to live” (p.270-71). Churchill himself commented that his wartime ally “really was a pale reflection almost throughout” (p.270) the Yalta conference.

Yet Roosevelt recovered sufficiently to return home from the conference and address Congress and the public on its results, plausibly claiming victory. The Soviet Union had agreed to participate in the United Nations and in the war in Asia, and to hold what could be construed as free elections in Poland. Had he lived longer, Roosevelt would have seen that Stalin delivered as promised on the Asian war. The Soviet Union also became a member of the United Nations and maintained its membership in the organization until its own dissolution in 1991, but was rarely if ever the partner Roosevelt envisioned in keeping world peace. The possibility of a democratic Poland, “by far the knottiest and most time-consuming issue Roosevelt confronted at Yalta” (p.285), was by contrast slipping away even before Roosevelt’s death.

At one point in his remaining weeks, Roosevelt exclaimed, “We can’t do business with Stalin. He has broken every one of the promises he made at Yalta” on Poland (p.304; Dobbs includes the same quotation, adding that Roosevelt thumped on his wheelchair at the time of this outburst). But, like Dobbs, Lelyveld argues that even a more physically fit, fully focused and coldly realistic Roosevelt would likely have been unable to save Poland from Soviet clutches. When the allies met at Yalta, Stalin’s Red Army was in the process of consolidating military control over almost all of Polish territory.  If Roosevelt had been at the peak of vigor, Lelyveld concludes, the results on Poland “would have been much the same” (p.287).

Roosevelt was still trying to mend fences with Stalin on April 11, 1945, the day before his death in Warm Springs. Throughout the following morning, Roosevelt worked on matters of state: he received an update on the US military advances within Germany and even signed a bill sustaining the Commodity Credit Corporation. Then, just before lunch, Roosevelt collapsed. Dr. Bruenn arrived about 15 minutes later and diagnosed a hemorrhage in the brain, a stroke likely caused by a burst blood vessel or a ruptured aneurysm. “Roosevelt was doomed from the instant he was stricken” (p.323).  Around midnight, Daisy Suckley recorded in her diary that the president had died at 3:35 pm that afternoon. “Franklin D. Roosevelt, the hope of the world, is dead” (p.324), she wrote.

Daisy was one of several women present at Warm Springs to provide company to the president during his final visit. Another was Eleanor Roosevelt’s former Secretary, Lucy Mercer Rutherford, by this time the primary Other Woman in the president’s life. Rutherford had driven down from South Carolina to be with the president, part of a recurring pattern in which Rutherford appeared in instances when wife Eleanor was absent, as if coordinated by a social secretary with the knowing consent of all concerned. But this orchestration broke down in Warm Springs in April 1945. After the president died, Rutherford had to flee in haste to make room for Eleanor. Still another woman in the president’s entourage, loquacious cousin Laura Delano, compounded Eleanor’s grief by letting her know that Rutherford had been in Warm Springs for the previous three days, adding gratuitously that Rutherford had also served as hostess at occasions at the White House when Eleanor was away. “Grief and bitter fury were folded tightly in a large knot” (p.325) for the former First Lady at Warm Springs.

Subsequently, Admiral McIntire asserted that Roosevelt had a “stout heart” and that his blood pressure was “not alarming at any time” (p.324-25), implying that the president’s death from a stroke had proven that McIntire had “always been right to downplay any suggestion that the president might have heart disease.” If not a flat-out falsehood, Lelyveld argues, McIntire’s assertion “at least raises the question of what it would have taken to alarm him” (p.325). Roosevelt’s medical file by this time had gone missing from the safe at Bethesda Naval Hospital, most likely removed by the Admiral because it would have revealed the “emptiness of the reassurances he’d fed the press and the public over the years, whenever questions arose about the president’s health” (p.325).

* * *

           Lelyveld declines to engage in what he terms an “argument without end” (p.92) on the degree to which Roosevelt’s deteriorating health impaired his job performance during his last months and final days. Rather, he  skillfully pieces together the limited historical record of Roosevelt’s medical condition to add new insights into the ailing but ever enigmatic president as he led his country nearly to the end of history’s most devastating war.


Thomas H. Peebles

La Châtaigneraie, France

March 28, 2017





Filed under American Politics, Biography, European History, History, United States History, World History

High Point of Modern International Economic Diplomacy

Ed Conway, The Summit: Bretton Woods 1944,

J.M. Keynes and the Reshaping of the Global Economy 

               During the first three weeks of July 1944, as World War II raged on the far sides of the Atlantic and Pacific oceans, 730 delegates from 44 countries gathered at the Mount Washington Hotel in northern New Hampshire for what has come to be known as the Bretton Woods conference. The conference’s objective was audacious: create a new and more stable framework for the post-World War II monetary order, with the hope of avoiding future economic upheavals like the Great Depression of the 1930s.   To this end, the delegates reconsidered and in many cases rewrote some of the most basic rules of international finance and global capitalism, such as how money should flow between sovereign states, how exchange rates should interact, and how central banks should set interest rates. The venerable but aging hotel sat in an area informally known as Bretton Woods, not far from Mount Washington itself, the highest peak in the eastern United States.

In The Summit, Bretton Woods, 1944: J.M. Keynes and the Reshaping of the Global Economy, Ed Conway, formerly economics editor for Britain’s Daily Telegraph and Sunday Telegraph and presently economics editor for Sky News, provides new and fascinating detail about the conference. The word “summit” in his title carries a triple sense: it refers to Mount Washington and to the term that came into use in the following decade for a meeting of international leaders. But Conway also contends that the Bretton Woods conference now appears to have been another sort of summit. The conference marked the “only time countries ever came together to remold the world’s monetary system” (p.xx).  It stands in history as the “very highest point of modern international economic diplomacy” (p.xxv).

Conway differentiates his work from others on Bretton Woods by focusing on the interactions among the delegates and the “sheer human drama” (p.xxii) of the event.  As the sub-title indicates, British economist John Maynard Keynes is foremost among these delegates. Conway could have added to his subtitle the lesser-known Harry Dexter White, Chief International Economist at the US Treasury Department and Deputy to Treasury Secretary Henry Morgenthau, the head of the US delegation and formal president of the conference.  White’s name in the subtitle would have underscored that this book is a story about the relationship between the two men who assumed de facto leadership of the conference. But the book is also a story about the uneasy relationship at Bretton Woods between the United States and the United Kingdom, the conference’s two lead delegations.

Although allies in the fight against Nazi Germany, the two countries were far from allies at Bretton Woods.  Great Britain, one of the world’s most indebted nations, came to the conference unable to pay for its own defense in the war against Nazi Germany and unable to protect and preserve its vast worldwide empire.  It was utterly outmatched at Bretton Woods by an already dominant United States, its principal creditor, which had little interest in providing debt relief to Britain or helping it maintain an empire. Even the force of Keynes’ dominating personality was insufficient to give Britain much more than a supplicant’s role at Bretton Woods.

Conway’s book also constitutes a useful and understandable historical overview of the international monetary order from pre-World War I days up to Bretton Woods and beyond.  The overview revolves around the gold standard as a basis for international currency exchanges and attempts over the years to find workable alternatives. Bretton Woods produced such an alternative, a standard pegged to the United States dollar — which, paradoxically, was itself tied to the price of gold.  Bretton Woods also produced two key institutions, the International Monetary Fund (IMF) and the International Bank for Reconstruction and Development, now known as the World Bank, designed to provide stability to the new economic order. But the Bretton Woods dollar standard remained in effect only until 1971, when US President Richard Nixon severed by presidential fiat the link between the dollar and gold, allowing currency values to float, as they had done in the 1930s.  In Conway’s view, the demise of Bretton Woods is to be regretted.

* * *

          Keynes was a legendary figure when he arrived at Bretton Woods in July 1944, a “genuine international celebrity, the only household name at Bretton Woods” (p.xv). Educated at King’s College, Cambridge, a member of the faculty of that august institution, and a peer in Britain’s House of Lords, Keynes was also a highly skilled writer and journalist, as well as a fearsome debater.  As a young man, he established his reputation with a famous critique of the 1919 Versailles Treaty, The Economic Consequences of the Peace, a tract that predicted with eerie accuracy the breakdown of the financial order that the post-World War I treaty envisioned, based upon the imposition of punitive reparations upon Germany. Although Keynes dazzled fellow delegates at Bretton Woods with his rhetorical brilliance, he was given to outlandish and provocative statements that hardly helped the bonhomie of the conference.   He suffered a heart attack toward the end of the conference and died less than two years later.

White was a contrast to Keynes in just about every way. He came from a modest first-generation Jewish immigrant family from Boston and had to scramble for his education. Unusual for the time, White earned an undergraduate degree from Stanford in his 30s, after having spent the better portion of a decade as a social worker. White had a dour personality, with none of Keynes’ flamboyance. Then there were the physical differences.   Keynes stood about six feet six inches tall (approximately 2.0 meters), whereas White was at least a foot shorter (approximately 1.7 meters). But if Keynes was the marquee star of Bretton Woods because of his personality and reputation, White was its driving force because he represented the United States, which indisputably dominated the conference.

By the time of the Bretton Woods conference, however, White was also unduly familiar with Soviet intelligence services. Although Conway hesitates to slap the “spy” label on him, there is little doubt that White provided a hefty amount of information to the Soviets, both at the conference and outside its confines. Of course, much of the “information sharing” took place during World War II, when the Soviet Union was allied with Britain and the United States in the fight against Nazi Germany and such sharing was seen in a different light than in the subsequent Cold War era.  One possibility, Conway speculates, was that White was “merely carrying out his own, personal form of diplomacy – unaware that the Soviets were construing this as espionage” (p.159; the Soviet Union attended the conference but did not join the international mechanisms which the conference established).

The reality, Conway concludes, is that we will “never know for certain whether White knowingly betrayed his country by passing information to the Soviets” (p.362).   Critically, there is “no evidence that White’s Soviet activities undermined the Bretton Woods agreement itself” (p.163). White died in 1948, four years after the conference, and the FBI’s case against him became moot. From that point onward, the question whether White was a spy for the Soviet Union became one almost exclusively for historians, a question that today remains unresolved (ironically, after White’s death, young Congressman Richard Nixon remained just about the only public official still interested in White’s case; when Nixon became president two decades later, he terminated the Bretton Woods financial standards White had helped create).

The conference itself begins at about the book’s halfway point. Prior to his account of its deliberations, Conway shows how the gold standard operated and traces the search for workable alternatives. In the period up to World War I, the world’s powers guaranteed that they could redeem their currency for its value in gold. The World War I belligerents went off the gold standard so they could print the currency needed to pay for their war costs, causing hyperinflation, as the supply of money overwhelmed the demand.  In the 1920s, countries gradually returned to the gold standard.

But the stock market crash of 1929 and ensuing depression prompted countries to again abandon the gold standard. In the 1930s, what Conway terms a “gold exchange standard” prevailed, in which governments undertook competitive devaluations of their currency. President Franklin Roosevelt, for example, used a “primitive scheme” to set the dollar “where he wanted it – which meant as low against the [British] pound as possible” (p.83).  The competitive devaluations and floating rates of the 1930s led to restrictive trade policies, discouraged trade and investment, and encouraged destabilizing speculation, all of which many economists linked to the devastating war that broke out across the globe at the end of the decade.

Bretton Woods sought to eliminate these disruptions for the post-war world by crafting an international monetary system based upon cooperation among the world’s sovereign states. The conference was preceded by nearly two years of negotiations between the Treasury Departments of Great Britain and the United States — essentially exchanges between Keynes and White, each with a plan on how a new international monetary order should operate. Both were “determined to use the conference to safeguard their own economies” (p.18). Keynes wanted to protect not only the British Empire but also London’s place as the center of international finance. White saw little need to protect the empire and foresaw New York as the world’s new economic hub.  He also wanted to locate the two institutions that Bretton Woods would create, the IMF and World Bank, in the United States, whereas Keynes hoped that at least one would be located either in Britain or on the European continent. White and the Americans would win on these and almost all other points of difference.

But Keynes and White shared a broad general vision that Bretton Woods should produce a system designed to do away with the worst effects of both the gold standard and the interwar years of instability and depression.   There needed to be something in between the rigidity associated with the gold standard on the one hand and free-floating currencies, which were “associated with dangerous flows of ‘hot money’ and inescapable lurches in exchange rates” (p.124), on the other. To White and the American delegation, “Bretton Woods needed to look as similar as possible to the gold standard: politicians’ hands should be tied to prevent them from inflating away their debts. It was essential to avoid the threat of the competitive devaluations that had wreaked such havoc in the 1930s” (p.171).  For Keynes and his colleagues, “Bretton Woods should be about ensuring stable world trade – without the rigidity of the gold standard” (p.171).

The British and American delegations met in Atlantic City in June 1944 in an attempt to narrow their differences before travelling to Northern New Hampshire, where the floor would be opened to the conference’s additional delegations.  Much of what happened at Bretton Woods was confined to the business pages of the newspapers, with attention focused on the war effort and President Roosevelt’s re-election bid for a fourth presidential term.  This suited White, who “wanted the conference to look as uncontroversial, technical and boring as possible” (p.203).  The conference was split into three main parts. White chaired Commission I, dealing with the IMF, while Keynes chaired Commission II, whose focus was the World Bank.  Each commission divided into multiple committees and sub-committees.  Commission III, whose formal title was “Other Means of International Cooperation,” was in Conway’s view essentially a “toxic waste dump into which White and Keynes could jettison some of the summit’s trickier issues” (p.216).

The core principle to emerge from the Bretton Woods deliberations was that the world’s currencies, rather than being tied directly to gold or allowed to float, would be pegged to the US dollar which, in turn, was tied to gold at a value of $35 per ounce. Keynes and White anticipated that fixing currencies against the dollar would ensure that:

international trade was protected from exchange rate risk. Nations would determine their own interest rates for purely domestic economic reasons, whereas under the gold standard, rates had been set primarily in order to keep the country’s gold stocks at an acceptable level. Countries would be allowed to devalue their currency if they became uncompetitive – but they would have to notify the International Monetary Fund in advance: this element of international co-ordination was intended to guard against a repeat of the 1930s spiral of competitive devaluation (p.369).


The IMF’s primary purpose under the Bretton Woods framework was to provide relief in balance of payments crises such as those of the 1930s, when countries in deficit were unable to borrow and exporting countries failed to find markets for their goods. “Rather than leaving the market to its own devices – the laissez-faire strategy discredited in the Depression – the Fund would be able to step in and lend countries money, crucially in whichever currency they most needed. So as to avoid the threat of competitive devaluations, the Fund would also arbitrate whether a country could devalue its exchange rate” (p.169).

One of the most sensitive issues in structuring the IMF involved the contributions that each country was required to pay into the Fund, termed “quotas.” When short of reserves, each member state would be entitled to borrow needed foreign currency in amounts determined by the size of its quota.  Most countries wanted to contribute more rather than less, both as a matter of national pride and as a means to gain future leverage with the Fund. Heated quota battles ensued “both publicly in the conference rooms and privately in the hotel corridors, until the very end of the proceedings” (p.222-23), with the United States ultimately determining quota amounts according to a process most delegations considered opaque and secretive.

The World Bank, almost an afterthought at the conference, was to have the power to finance reconstruction in Europe and elsewhere after the war.  But the Marshall Plan, an “extraordinary program of aid devoted to shoring up Europe’s economy” (p.357), upended Bretton Woods’ visions for both institutions for nearly a decade.  It was the Marshall Plan that rebuilt Europe in the post-war years, not the IMF or the World Bank. The Fund’s main role in its initial years, Conway notes, was to funnel money to member countries “as a stop-gap before their Marshall Plan aid arrived” (p.357).

When Harry Truman became President in April 1945 after Roosevelt’s death, he replaced Roosevelt’s Treasury Secretary Henry Morgenthau, White’s boss, with future Supreme Court justice Fred Vinson. Never a fan of White, Vinson diminished his role at Treasury and White left the department in 1947. He died the following year, in August 1948, at age 55.  Although the July 1945 change in British Prime Ministers from Winston Churchill to Clement Attlee did not undermine Keynes to the same extent, his deteriorating health diminished his role after Bretton Woods as well. Keynes died in April 1946 at age 62, shortly after returning to Britain from the inaugural IMF meeting in Savannah, Georgia, his last encounter with White.

Throughout the 1950s, the US dollar assumed a “new degree of hegemony,” becoming “formally equivalent to gold. So when they sought to bolster their foreign exchange reserves to protect them from future crises, foreign governments built up large reserves of dollars” (p.374). But with more dollars in the world economy, the United States found it increasingly difficult to convert them back into gold at the official exchange rate of $35 per ounce.  When Richard Nixon became president in 1969, the United States held $10.5 billion in gold, but foreign governments had $40 billion in dollar reserves, and foreign investors and corporations held another $30 billion. The world’s monetary system had become, once again, an “inverted pyramid of paper money perched on a static stack of gold” and Bretton Woods was “buckling so badly it seemed almost certain to collapse” (p.377).

In a single secluded weekend in 1971 at the presidential retreat at Camp David, Maryland, Nixon’s advisors fashioned a plan to “close the gold window”: the United States would no longer provide gold to official foreign holders of dollars and instead would impose “aggressive new surcharges and taxes on imports intended to push other countries into revaluing their own currencies” (p.381).  When Nixon agreed to his advisors’ proposal, the Bretton Woods system, which had “begun with fanfare, an unprecedented series of conferences and the deepest investigation in history into the state of macro-economics” ended overnight, “without almost anyone realizing it” (p.385). The era of fixed exchange rates was over, with currency values henceforth to be determined by “what traders and investors thought they were worth” (p.392).  Since 1971, the world’s monetary system has operated on what Conway describes as an “ad hoc basis, with no particular sense of the direction in which to follow” (p.401).

* * *

            In his epilogue, Conway cites a 2011 Bank of England study that showed that between 1948 and the early 1970s, the world enjoyed a “period of economic growth and stability that has never been rivaled – before or since” (p.388).  In Bretton Woods member states during this period “life expectancy climbed swiftly higher, inequality fell, and social welfare systems were constructed which, for the time being at least, seemed eminently affordable” (p.388).  The “imperfect” and “short-lived” (p.406) system which Keynes and White fashioned at Bretton Woods may not be the full explanation for these developments but it surely contributed.  In the messy world of international economics, that system has “come to represent something hopeful, something closer to perfection” (p.408).  The two men at the center of this captivating story came to Bretton Woods intent upon repairing the world’s economic system and replacing it with something better — something that might avert future economic depressions and the resort to war to settle differences.  “For a time,” Conway concludes, “they succeeded” (p.408).

Thomas H. Peebles

La Châtaigneraie, France

March 8, 2017


Filed under British History, European History, History, United States History, World History

Do Something



Zachary Kaufman, United States Law and Policy on Transitional Justice:

Principles, Politics, and Pragmatics 

             The term “transitional justice” is applied most frequently to “post conflict” situations, where a nation state or region is emerging from some type of war or violent conflict that has given rise to genocide, war crimes, or crimes against humanity — each now a recognized concept under international law, with “mass atrocities” being a common shorthand used to embrace these and related concepts. In United States Law and Policy on Transitional Justice: Principles, Politics, and Pragmatics, Zachary Kaufman, a Senior Fellow and expert on human rights at Harvard University’s Kennedy School of Government, explores the circumstances which have led the United States to support that portion of the transitional justice process that determines how to deal with suspected perpetrators of mass atrocities, and why it chooses a particular means of support (disclosure: Kaufman and I worked together in the US Department of Justice’s overseas assistance unit between 2000 and 2002, although we had different portfolios: Kaufman’s involved Africa and the Middle East, while I handled Central and Eastern Europe).

          Kaufman’s book, adapted from his Oxford University PhD dissertation, centers around case studies of the United States’ role in four major transitional justice situations: Germany and Japan after World War II, and ex-Yugoslavia and Rwanda in the 1990s, after the end of the Cold War. It also looks more briefly at two secondary cases, the 1988 bombing of Pan American flight 103, attributed to Libyan nationals, and atrocities committed during Iraq’s 1990-91 occupation of Kuwait. Making extensive use of internal US government documents, many of which have been declassified, Kaufman digs deeply into the thought processes that informed the United States’ decisions on transitional justice in these six post-conflict situations. Kaufman brings a social science perspective to his work, attempting to tease out of the case studies general rules about how the United States might act in future transitional justice situations.

          The term “transitional justice” implicitly affirms that a permanent and independent national justice system can and should be created or restored in the post-conflict state.  Kaufman notes at one point that dealing with suspected perpetrators of mass atrocities is just one of several critical tasks involved in creating or restoring a permanent national justice system in a post-conflict state.  Others can include: building or rebuilding sustainable judicial institutions, strengthening the post-conflict state’s legislation, improving capacity of its justice-sector personnel, and creating or upgrading the physical infrastructure needed for a functioning justice system. These latter tasks are not the focus of Kaufman’s work. Moreover, in determining how to deal with alleged perpetrators of mass atrocities, Kaufman’s focus is on the front end of the process: how and why the United States determined to support this portion of the process generally and why it chose particular mechanisms rather than others.   The outcomes that the mechanisms produce, although mentioned briefly, are not his focus either.

          In each of the four primary cases, the United States joined other nations to prosecute those accused or suspected of involvement in mass atrocities before an international criminal tribunal, which Kaufman characterizes as the “most significant type of transitional justice institution” (p.12). Prosecution before an international tribunal, he notes, can promote stability, the rule of law and accountability, and can serve as a deterrent to future atrocities. But the process can be both slow and expensive, with significant political and legal risks. Kaufman’s work provides a useful reminder that prosecution by an international tribunal is far from the only option available to deal with alleged perpetrators of mass atrocities. Others include trials in other jurisdictions, including those of the post-conflict state, and several non-judicial alternatives: amnesty for those suspected of committing mass atrocities, with or without conditions; “lustration,” where suspected persons are disenfranchised from specific aspects of civic life (e.g., declared ineligible for the civil service or the military); and “doing nothing,” which Kaufman considers tantamount to unconditional amnesty.  Finally, there is the option of summary execution or other punishment, without benefit of trial. These options can be applied in combination, e.g., amnesty for some, trial for others.

         Kaufman weighs two models, “legalism” and “prudentialism,” as potential explanations for why and how the United States acted in the cases under study and is likely to act in the future.  Legalism contends that prosecution before an international tribunal of individuals suspected or accused of mass atrocities is the only option a liberal democratic state may elect, consistent with its adherence to the rule of law.  In limited cases, amnesty or lustration may be justified as a supplement to initiating cases before a tribunal. Summary execution may never be justified. Prudentialism is more ad hoc and flexible, with the question whether to establish or invoke an international criminal tribunal or pursue other options determined by any number of different political, pragmatic and normative considerations, including such geo-political factors as promotion of stability in the post-conflict state and region, the determining state or states’ own national security interests, and the relationships between determining states. Almost by definition, legalism precludes consideration of these factors.

          Kaufman presents his cases in a highly systematic manner, with tight overall organization. An introduction and three initial chapters set forth the conceptual framework for the subsequent case studies, addressing matters like methodology and definitional parameters.  The four major cases are then treated in four separate chapters, each with its own introduction and conclusion, followed by an overall conclusion, also with its own introduction and conclusion (the two secondary cases, Libya and Iraq, are treated within the chapter on ex-Yugoslavia).  Substantive headings throughout each chapter make his arguments easy to follow.   General readers may find jarring his extensive use of acronyms throughout the text, drawn from a three-page list contained at the outset. But amidst Kaufman’s deeply analytical exploration of the thinking that lay behind the United States’ actions, readers will appreciate his decidedly non-sociological hypothesis as to why the United States elects to engage in the transitional justice process: a deeply felt American need in the wake of mass atrocities to “do something” (always in quotation marks).

* * *

          Kaufman begins his case studies with the best-known example of transitional justice, Nazi Germany after World War II. The United States supported creation of what has come to be known as the Nuremberg War Crimes tribunal, a military court administered by the four victorious allies, the United States, Soviet Union, Great Britain and France. The Nuremberg story is so well known, thanks in part to “Judgment at Nuremberg,” the best-selling book and popular film, that most readers will assume that the multi-lateral Nuremberg trials were the only option seriously under consideration at the time. To the contrary, Kaufman demonstrates that such trials were far from the only option on the table.

        For a while the United States seriously considered summary executions of accused Nazi leaders. British Prime Minister Winston Churchill pushed this option during wartime deliberations and, Kaufman indicates, President Roosevelt seemed at times on the cusp of agreeing to it. Equally surprising, Soviet leader Joseph Stalin lobbied early and hard for a trial process rather than summary executions. The Nuremberg Tribunal “might not have been created without Stalin’s early, constant, and forceful lobbying” (p.89), Kaufman contends.  Roosevelt abandoned his preference for summary executions after economic aspects of the Morgenthau Plan, which involved the “pastoralization” of Germany, were leaked to the press. When the American public “expressed its outrage at treating Germany so harshly through a form of economic sanctions,” Roosevelt concluded that Americans would be “unsupportive of severe treatment for the Germans through summary execution” (p.85).

          But the United States’ support for war crimes trials became unwavering only after Roosevelt died in April 1945 and Harry S. Truman assumed the presidency.  The details and mechanics of a multi-lateral trial process were not worked out until early August 1945 in the “London Agreement,” after Churchill had been voted out of office and Labour Prime Minister Clement Attlee represented Britain. Trials against 22 high-level Nazi officials began in November 1945, with verdicts rendered in October 1946: twelve defendants were sentenced to death, seven drew prison sentences, and three were acquitted.

       Many lower-level Nazi officials were tried in unilateral prosecutions by one of the allied powers.   Lustration, barring active Nazi party members from major public and private positions, was applied in the US, British, and Soviet sectors.  Numerous high-level Nazi officials were allowed to emigrate to the United States to assist in Cold War endeavors, which Kaufman characterizes as a “conditional amnesty” (the Nazi war criminals who emigrated to the United States are the subject of Eric Lichtblau’s The Nazis Next Door: How America Became a Safe Haven for Hitler’s Men, reviewed here in October 2015; Frederick Taylor’s Exorcising Hitler: The Occupation and Denazification of Germany, reviewed here in December 2012, addresses more generally the manner in which the Allies dealt with lower-level Nazi officials). By 1949, the Cold War between the Soviet Union and the West undermined the allies’ appetite for prosecution, with the Korean War completing the process of diverting the world’s attention away from Nazi war criminals.

          The story behind creation of the International Military Tribunal for the Far East, designed to hold accountable accused Japanese perpetrators of mass atrocities, is far less known than that of Nuremberg, Kaufman observes.  What has come to be known as the “Tokyo Tribunal” largely followed the Nuremberg model, with some modifications. Even though 11 allies were involved, the United States was closer to being the sole decision-maker on the options to pursue in Japan than it had been in Germany. As the lead occupier of post-war Japan, the United States had “no choice but to ‘do something’” (p.119).   Only the United States had both the means and the will to oversee the post-conflict occupation and administration of Japan. That oversight authority was vested largely in a single individual, General Douglas MacArthur, Supreme Commander of the Allied forces, whose extraordinarily broad, nearly dictatorial authority in post-World War II Japan extended to the transitional justice process. MacArthur approved appointments to the tribunal, signed off on its indictments, and exercised review authority over its decisions.

            In the interest of securing the stability of post-war Japan, the United States accorded unconditional amnesty to Japan’s Emperor Hirohito. The Tokyo Tribunal indicted twenty-eight high-level Japanese officials, but more than fifty were not indicted, and thus also benefited from an unconditional amnesty. This included many suspected of “direct involvement in some of the most horrific crimes of WWII” (p.108), several of whom eventually returned to Japanese politics. Through lustration, more than 200,000 Japanese were removed or barred from public office, either permanently or temporarily.  As in Germany, by the late 1940s the emerging Cold War with the Soviet Union had chilled the United States’ enthusiasm for prosecuting Japanese suspected of war crimes.

           The next major United States engagements in transitional justice arose in the 1990s, when the former Yugoslavia collapsed into a spasm of ethnic violence and massive ethnic-based genocide erupted in Rwanda in 1994. By this time, the Soviet Union had itself collapsed and the Cold War was over. In both instances, heavy United States involvement in the post-conflict process was attributed in part to a sense of remorse for its lack of involvement in the conflicts themselves and its failure to halt the ethnic violence, resulting in a need to “do something.”  Rwanda marks the only instance among the four primary cases where mass atrocities arose out of an internal conflict.

       The ethnic conflicts in Yugoslavia led to the creation of the International Criminal Tribunal for Yugoslavia (ICTY), based in The Hague and administered under the auspices of the United Nations Security Council. Kaufman provides much useful insight into the thinking behind the United States’ support for the creation of the court and the decision to base it in The Hague as an authorized Security Council institution. His documentation shows that United States officials consistently invoked the Nuremberg experience. The United States supported a multi-lateral tribunal through the Security Council because the council could “obligate all states to honor its mandates, which would be critical to the tribunal’s success” (p.157). The United States saw the ICTY as critical in laying a foundation for regional peace and facilitating reconciliation among competing factions. But it also supported the ICTY and took a lead role in its design to “prevent it from becoming a permanent [tribunal] with global reach” (p.158), which it deemed “potentially problematic” (p.157).

             The United States’ willingness to involve itself in the post-conflict transitional process in Rwanda, even more than in the former Yugoslavia, may be attributed to its failure to intervene during the worst moments of the genocide itself.  That the United States “did not send troops or other assistance to Rwanda perversely may have increased the likelihood of involvement in the immediate aftermath,” Kaufman writes. A “desire to compensate for its foreign policy failures in Rwanda, if not also feelings of guilt over not intervening, apparently motivated at least some [US] officials to support a transitional justice institution for Rwanda” (p.197).

         Once the Rwandan civil war subsided, there was a strong consensus within the international community that some kind of international tribunal was needed to impose accountability upon the most egregious génocidaires; that any such tribunal should operate under the auspices of the United Nations Security Council; that the tribunal should in some sense be modeled after the ICTY; and that the United States should take the lead in establishing the tribunal. The ICTY precedent prompted US officials to “consider carefully the consistency with which they applied transitional justice solutions in different regions; they wanted the international community to view [the US] as treating Africans similarly to Europeans” (p.182). According to these officials, after the precedent of proactive United States involvement in the “arguably less egregious Balkans crisis,” the United States would have found it “politically difficult to justify inaction in post-genocide Rwanda” (p.182).

           The United States favored a tribunal modeled after and structurally similar to the ICTY, which came to be known as the International Criminal Tribunal for Rwanda (ICTR). The ICTR was the first international court with competence to “prosecute and punish individuals for egregious crimes committed during an internal conflict” (p.174), a watershed development in international law and transitional justice.  To deal with lower-level génocidaires, the Rwandan government and the international community later instituted additional prosecutorial measures, including prosecutions by Rwandan domestic courts and local community councils, termed gacaca.

          No international tribunals were created in the two secondary cases, Libya after the 1988 bombing of Pan Am flight 103, and the 1990-91 Iraqi invasion of Kuwait. At the time of the Pan Am bombing, well over a decade prior to the September 11, 2001 attacks, United States officials considered terrorism a matter to be addressed “exclusively in domestic contexts” (p.156).  In the case of the bombing of Pan Am 103, where Americans had been killed, competent courts were available in the United States and the United Kingdom. There were numerous documented cases of Iraqi atrocities against Kuwaiti civilians committed during Iraq’s 1990-91 invasion of Kuwait.  But the 1991 Gulf War, while driving Iraq out of Kuwait, otherwise left Iraqi leader Saddam Hussein in power. The United States was therefore not in a position to impose accountability upon Iraqis for atrocities committed in Kuwait, as it had done after defeating Germany and Japan in World War II.

* * *

         In evaluating the prudentialism and legalism models as ways to explain the United States’ actions in the four primary cases, prudentialism emerges as a clear winner.  Kaufman convincingly demonstrates that the United States in each case was open to multiple options and motivated by geo-political and other non-legal considerations.  Indeed, it is difficult to imagine that the United States, or any other state for that matter, would ever agree in advance to disregard such considerations, as the legalism model seems to demand. After reflecting upon Kaufman’s analysis, I concluded that legalism might best be understood as more aspirational than empirical, a forward-looking, prescriptive model of how the United States should act in future transitional justice situations, favored in particular by human rights organizations.

         But Kaufman also shows that the United States’ approach in each of the four cases was not entirely an ad hoc weighing of geo-political and related considerations.  Critical to his analysis are the threads that link the four cases, what he terms “path dependency”: the Nuremberg trial process for Nazi war criminals served as a powerful influence upon the process set up for their Japanese counterparts; the combined Nuremberg-Tokyo experience weighed heavily in the creation of the ICTY; and the ICTY strongly influenced the structure and procedure of the ICTR.  This cumulative experience constitutes another factor in explaining why the United States in the end opted for international criminal tribunals in each of the four cases.

         If a general rule can be extracted from Kaufman’s four primary cases, it might therefore be that an international criminal tribunal has evolved into the “default option” for the United States in transitional justice situations,  showing the strong pull of the only option which the legalism model considers consistent with the rule of law.  But these precedents may exert less hold on US policy makers going forward, as an incoming administration reconsiders the United States’ role in the 21st century global order. Or, to use Kaufman’s apt phrase, there may be less need felt for the United States to “do something” in the wake of future mass atrocities.

Thomas H. Peebles

Venice, Italy

February 10, 2017



Filed under American Politics, United States History

Can’t Forget the Motor City




David Maraniss, Once In a Great City: A Detroit Story

     In 1960, Detroit was the automobile capital of the world, America’s undisputed center of manufacturing, and its fifth most populous city, with that year’s census tallying 1.67 million people. Fifty years later, the city had lost nearly a million people; its population had dropped to 677,000 and it ranked 21st in population among America’s cities in the 2010 census. Then, in 2013, the city reinforced its image as an urban basket case by ignominiously filing for bankruptcy. In Once In a Great City: A Detroit Story, David Maraniss, a native Detroiter of my generation and a highly skilled journalist whose previous works include books on Barack Obama, Bill Clinton and Vince Lombardi, focuses upon Detroit before its precipitous fall, an 18-month period from late 1962 to early 1964.   This was the city’s golden moment, Maraniss writes, when Detroit “seemed to be glowing with promise. . . a time of uncommon possibility and freedom when Detroit created wondrous and lasting things” (p.xii-xiii; in March 2012, I reviewed here two books on post World War II Detroit, under the title “Tales of Two Cities”).

       Detroit produced more cars in this 18-month period than Americans produced babies.  Berry Gordy Jr.’s popular music empire, known officially and affectionately as “Motown,” was selling a new, upbeat pop music sound across the nation and around the world.  Further, at a time when civil rights for African-Americans had become America’s most morally compelling issue, race relations in a city then about one-third black appeared to be as good as anywhere in the United States. With a slew of high-minded officials in the public and private sector dedicated to racial harmony and justice, Detroit sought to present itself as a model for the nation in securing opportunity for all its citizens.

      Maraniss begins his 18-month chronicle with dual events on the same day in November 1962: the burning of an iconic Detroit area memorial to the automobile industry, the Ford Rotunda, a “quintessentially American harmonic convergence of religiosity and consumerism” (p.1-2); and, later that afternoon, a police raid on the Gotham Hotel, once the “cultural and social epicenter of black Detroit” (p.10), but by then considered a den of illicit gambling controlled by organized crime groups.  He ends with President Lyndon Johnson’s landmark address in May 1964 on the campus of the nearby University of Michigan in Ann Arbor, where Johnson outlined his grandiose vision of the Great Society.  Johnson chose Ann Arbor as the venue for this address in large measure because of its proximity to Detroit. No place seemed “more important to his mission than Detroit,” Maraniss writes, a “great city that honored labor, built cars, made music, promoted civil rights, and helped lift working people into the middle class” (p.360).

     Maraniss’ chronicle unfolds between these bookend events, revolving around what had attracted President Johnson to the Detroit area in May 1964: building cars, making music, promoting civil rights, and lifting working people into the middle class. He skillfully weaves these strands into an affectionate, deeply researched yet easy-to-read portrait of Detroit during this 18-month golden period.  But Maraniss does not ignore the fissures, visible to those perceptive enough to recognize them, which would lead to Detroit’s later unraveling.  Detroit may have found the right formula for bringing a middle-class lifestyle to working-class Americans, black and white alike. But already Detroit was losing population as its white working class took advantage of newfound prosperity to leave the city for nearby suburbs.  Moreover, many in Detroit’s black community found the city to be anything but a model of racial harmony.

* * *

     An advertising executive described Detroit in 1963 as “intensely an automobile community – everybody lives, breathes, and sleeps automobiles. It’s like a feudal city” (p.111). Maraniss’ inside account of Detroit’s automobile industry focuses principally upon the remarkable relationship between Ford Motor Company’s chief executive, Henry Ford II (sometimes referred to as “HF2” or “the Deuce”), and the head of the United Auto Workers, Walter Reuther, during this 18-month golden age (Maraniss accords far less attention to the other two members of Detroit’s “Big Three,” General Motors and Chrysler, or to the upstart American Motors Corporation, whose chief executive, George Romney, was elected governor in November 1962 as a Republican). Ford and Reuther could not have been more different.

     Ford, from Detroit’s most famous industrial family, was a graduate of Hotchkiss School and Yale University who had been called home from military service during World War II to run the family business when his father Edsel Ford, then company president, died in 1943. Maraniss mischievously describes the Deuce as having a “touch of the peasant, with his manicured nails and beer gut and . . . frat-boy party demeanor” (p.28). Yet, Ford earnestly sought to modernize a company that he thought had grown too stodgy.  And, early in his tenure, he had famously said, “Labor unions are here to stay” (p.212).

      Reuther was a graduate of the “school of hard knocks,” the son of German immigrants whose father had worked in the West Virginia coalmines.   Reuther himself had worked his way up the automobile assembly line hierarchy to head its powerful union. George Romney once called Reuther the “most dangerous man in Detroit” (p.136). But Reuther prided himself on “pragmatic progressivism over purity, getting things done over making noise. . . [He was] not Marxist but Rooseveltian – in his case meaning as much Eleanor as Franklin” (p.136). Reuther believed that big government was necessary to solve big problems. During the Cold War, he won the support of Democratic presidents by “steering international trade unionists away from communism” (p.138).

     A quarter of a century after the infamous confrontation between Reuther and goons recruited by the Deuce’s grandfather Henry Ford to oppose unionization in the automobile industry — an altercation in which Reuther was seriously injured — the younger Ford’s partnership with Reuther blossomed. Rather than bitter and violent confrontation, the odd couple worked together to lift huge swaths of Detroit’s blue-collar auto workers into the middle class – arguably Detroit’s most significant contribution to American society in the second half of the 20th century. “When considering all that Detroit has meant to America,” Maraniss writes, “it can be said in a profound sense that Detroit gave blue-collar workers a way into the middle class . . . Henry Ford II and Walter Reuther, two giants of the mid-twentieth century, were essential to that result” (p.212).

      Reuther was aware that, despite higher wages and improved benefits, life on the assembly lines remained “tedious and soul sapping if not dehumanizing and dangerous” for autoworkers (p.215). He therefore consistently supported improving leisure time for workers outside the factory.  Music was one longstanding outlet for Detroiters, including its autoworkers. The city’s rich history of gospel, jazz and rhythm and blues musicians gave Detroit an “unmatched creative melody” (p.100), Maraniss observes.   By the early 1960s, Detroit’s musical tradition had become identified with the work of Motown founder, mastermind and chief executive, Berry Gordy Jr.

     Gordy was an ambitious man of “inimitable skills and imagination . . . in assessing talent and figuring out how to make it shine” (p.100).  Gordy aimed to market his Motown sound to white and black listeners alike, transcending the racial confines of the traditional rhythm and blues market. He set up what Maraniss terms a “musical assembly line” that “nurtured freedom through discipline” (p.195) for his many talented performers. The songs which Gordy wrote and championed captured the spirit of working class life: “clear story lines, basic and universal music for all people, focusing on love and heartbreak, work and play, joy and pain” (p.53).

      Gordy’s team included a mind-boggling array of established stars: Mary Wells, Marvin Gaye, Smokey Robinson and his Miracles, Martha Reeves and her Vandellas, Diana Ross and her Supremes, and the twelve-year-old prodigy, Little Stevie Wonder.  Among Gordy’s rising future stars were the Temptations and the Four Tops. The Motown team was never more talented than in the summer of 1963, Maraniss contends. Ten Motown singles rose to Billboard’s Top 10 that year, and eight more to the Top 20.  Wonder, who dropped “Little” before his name in 1963, saw his “Fingertips Part 2” rocket up the charts to No. 1.  Martha and the Vandellas made their mark with “Heat Wave,” a song with “irrepressibly joyous momentum” (p.197).  But the title could have referred equally to the rising intensity of the nationwide quest for racial justice and civil rights for African-Americans that summer.

       Maraniss reminds us that in June 1963, nine weeks before the March on Washington, Dr. Martin Luther King, Jr. delivered the outlines of his famous “I Have a Dream” speech at the end of a huge Detroit “Walk to Freedom” rally that took place almost exactly 20 years after a devastating racial confrontation between blacks and whites in wartime Detroit. The Walk drew an estimated 100,000 marchers, including a significant if limited number of whites. What King said that June 1963 afternoon, Maraniss writes, was “virtually lost to history, overwhelmed by what was to come, but the first time King dreamed his dream at a large public gathering, he dreamed it in Detroit” (p.182). Concerns about disorderly conduct and violence preceded both the Detroit Walk to Freedom and the March on Washington two months later. Yet the two events were for all practical purposes free of violence.  Just as the March on Washington energized King’s non-violent quest for civil rights nationwide, the Walk to Freedom buoyed Detroit’s claim to be a model of racial justice in the urban north.

       In the Walk to Freedom and in the nationwide quest for racial justice, Walter Reuther was an unsung hero. Under Reuther’s leadership, the UAW made an “unequivocal moral and financial commitment to civil rights action and legislation” (p.126).  Once John Kennedy assumed the presidency, Reuther consistently pressed the administration to move on civil rights.  The White House in turn relied on Reuther to serve as a liaison to black civil rights leaders, especially to Dr. King and his southern desegregation campaign. The UAW functioned as what Maraniss terms the “bank” (p.140) of the civil rights movement, providing needed funding at critical junctures. To be sure, Maraniss emphasizes, not all rank-and-file UAW members shared Reuther’s passionate commitment to the Walk to Freedom, the March on Washington, or the cause of civil rights for African-Americans.

     Even within Detroit’s black community, not all leaders supported the Walk to Freedom. Maraniss provides a close look at the struggle between the Reverend C.L. Franklin and the Reverend Albert Cleage for control over the details of the rally and, more generally, over the direction of the quest for racial justice in Detroit. Reverend Franklin, Detroit’s “flashiest and most entertaining preacher” (p.12; also the father of singer Aretha, who somehow escaped Gordy’s clutches to record for Columbia Records and later Atlantic), was King’s closest ally in Detroit’s black community. Cleage, whose church later became known as the Shrine of the Black Madonna, founded on the belief that Jesus was black, was not wedded to Dr. King’s brand of non-violence. Cleage sought to limit the influence of Reuther, the UAW and whites generally in the Walk to Freedom. Franklin was able to retain the upper hand in setting the terms and conditions for the June 1963 rally.  But the dispute between Reverends Franklin and Cleage reflected the more fundamental difference between black nationalism and Martin Luther King-style integration, and was thus an “early formulation of a dispute that would persist throughout the decade” (p.232).

     In November of 1963, Cleage sponsored a conference that featured black nationalist Malcolm X’s “Message to the Grass Roots,” an important if less well known counterpoint to King’s “I Have A Dream” speech in Washington in August of that year.  In tone and substance, Malcolm’s address “marked a break from the past and laid out a path for the black power movement to follow from then on” (p.279). Malcolm referred in his speech to the highly publicized police killing of prostitute Cynthia Scott the previous summer, which had generated outrage throughout Detroit’s black community and exacerbated long simmering tensions between the community and a police force that was more than 95% white.

     Scott’s killing “discombobulated the dynamics of race in the city. Any communal black and white sensibility resulting from the June 23 [Walk to Freedom] rally had dissipated, and the prevailing feeling was again us versus them” (p.229).  The tension between police and community did not abate when Police Commissioner George Edwards, a longstanding liberal who enjoyed strong support within the black community, considered the Scott case carefully and ruled that the shooting was “regrettable and unwise . . . but by the standards of the law it was justified” (p.199).

      Then there was the contentious issue of a proposed Open Housing ordinance that would have forbidden property owners from refusing to sell their property on the basis of race. The proposed ordinance required passage by the city’s nine-person City Council, elected at large in a city that was one-third black; no one on the council directly represented the city’s black neighborhoods. The proposal was similar in intent to future national legislation, the Fair Housing Act of 1968, and had the enthusiastic support of Detroit’s progressive mayor, Jerome Cavanagh, a youthful Irish Catholic who deliberately cast himself as a midwestern John Kennedy.

      But the proposal evoked bitter opposition from white homeowner associations across the city, revealing the racial fissures within Detroit. “On one side were white homeowner groups who said they were fighting on behalf of individual rights and the sanctity and safety of their neighborhoods. On the other side were African American churches and social groups, white and black religious leaders, and the Detroit Commission on Community Relations, which had been established . . . to try to bridge the racial divide in the city” (p.242).   Notwithstanding the support of the Mayor and leaders like Reuther and Reverend Franklin, white homeowner opposition doomed the proposed ordinance. The City Council rejected the proposal 7-2, a stinging rebuke to the city’s self-image as a model of racial progress and harmony.

       Detroit’s failed bid for the 1968 Olympics was an equally stinging rebuke to the self-image of a city that loved sports as much as music. Detroit bested the more glamorous Los Angeles for the right to represent the United States in international competition for the games. A delegation of city leaders, including Governor Romney and Mayor Cavanagh, traveled to Baden-Baden, Germany, where they made a well-received presentation to the International Olympic Committee. While Detroit was making its presentation, the Committee received a letter from an African American resident of Detroit who alluded to the Scott case and the failed Open Housing ordinance to argue against awarding the games to the city on the ground that fair play “has not become a living part of Detroit” (p.262). Although bookmakers had made Detroit a 2-1 favorite for the 1968 games, the Committee awarded them to Mexico City. Maraniss attributes the selection largely to Cold War considerations, with Soviet bloc countries voting against Detroit. The delegation dismissed the view that the letter to the Committee might have undermined Detroit’s bid, but its actual effect on the Committee’s decision remains undetermined.

         Maraniss asks whether Detroit might have been able to better contain or even ward off the devastating 1967 riots had it been awarded the 1968 Olympic games. “Unanswerable, but worth pondering” is his response (p.271). In explaining the demise of Detroit, many, myself included, start with the 1967 riots, which in a few short but violent days destroyed large swaths of the city, obliterating once solid neighborhoods and accelerating white flight to the suburbs.  But Maraniss emphasizes that white flight was already well underway long before the 1967 disorders. The city’s population had dropped from just under 1.9 million in the 1950 census to 1.67 million in 1960. In January of 1963, Wayne State University demographers published “The Population Revolution in Detroit,” a study which foresaw an even more precipitous emigration of Detroit’s working class in the decades ahead. The Wayne State demographers “predicted a dire future long before it became popular to attribute Detroit’s fall to a grab bag of Rust Belt infirmities, from high labor costs to harsh weather, and before the city staggered from more blows of municipal corruption and incompetence. Before any of that, the forces of deterioration were already set in motion” (p.91). Only a minor story in January 1963, the findings and projections of the Wayne State study were in retrospect of “startling importance and haunting prescience” (p.89).

* * *

      My high school classmates are likely to find Maraniss’ book a nostalgic trip down memory lane: his 18-month period begins with our senior year in a suburban Detroit high school and ends with our freshman college year, our own time of soaring youthful dreams, however unrealistic. But for those readers lacking a direct connection to the book’s time and place, and particularly for those who may still think of Detroit only as an urban basket case, Maraniss provides a useful reminder that it was not always thus.  He nails the point in a powerful sentence: “The automobile, music, labor, civil rights, the middle class – so much of what defines our society and culture can be traced to Detroit, either made there or tested there or strengthened there” (p.xii).  To this, he could have added, borrowing from Martha and the Vandellas’ 1964 hit, “Dancing in the Street,” that America can’t afford to forget the Motor City.


                   Thomas H. Peebles

Berlin, Germany

October 28, 2016


Filed under American Politics, American Society, United States History