Category Archives: History

High Point of Modern International Economic Diplomacy

Ed Conway, The Summit: Bretton Woods, 1944:

J.M. Keynes and the Reshaping of the Global Economy 

During the first three weeks of July 1944, as World War II raged on the far sides of the Atlantic and Pacific oceans, 730 delegates from 44 countries gathered at the Mount Washington Hotel in northern New Hampshire for what has come to be known as the Bretton Woods conference. The conference’s objective was audacious: to create a new and more stable framework for the post-World War II monetary order, in the hope of avoiding future economic upheavals like the Great Depression of the 1930s. To this end, the delegates reconsidered and in many cases rewrote some of the most basic rules of international finance and global capitalism, such as how money should flow between sovereign states, how exchange rates should interact, and how central banks should set interest rates. The venerable but aging hotel sits in an area informally known as Bretton Woods, not far from Mount Washington itself, the highest peak in the northeastern United States.

In The Summit: Bretton Woods, 1944: J.M. Keynes and the Reshaping of the Global Economy, Ed Conway, formerly economics editor for Britain’s Daily Telegraph and Sunday Telegraph and presently economics editor for Sky News, provides new and fascinating detail about the conference. The word “summit” in his title carries a triple sense: it refers both to Mount Washington and to the term, which came into use in the following decade, for a meeting of international leaders. But Conway also contends that the Bretton Woods conference now appears to have been a summit of a third sort. The conference marked the “only time countries ever came together to remold the world’s monetary system” (p.xx). It stands in history as the “very highest point of modern international economic diplomacy” (p.xxv).

Conway differentiates his work from others on Bretton Woods by focusing on the interactions among the delegates and the “sheer human drama” (p.xxii) of the event. As the subtitle indicates, British economist John Maynard Keynes is foremost among these delegates. Conway could have added to his subtitle the lesser-known Harry Dexter White, Chief International Economist at the US Treasury Department and Deputy to Treasury Secretary Henry Morgenthau, the head of the US delegation and formal president of the conference. White’s name in the subtitle would have underscored that this book is a story about the relationship between the two men who assumed de facto leadership of the conference. But the book is also a story about the uneasy relationship at Bretton Woods between the United States and the United Kingdom, the conference’s two lead delegations.

Although allies in the fight against Nazi Germany, the two countries were far from allies at Bretton Woods. Great Britain, one of the world’s most indebted nations, came to the conference unable to pay for its own defense in that fight and unable to protect and preserve its vast worldwide empire. It was utterly outmatched at Bretton Woods by an already dominant United States, its principal creditor, which had little interest in providing debt relief to Britain or helping it maintain an empire. Even the force of Keynes’ dominating personality was insufficient to give Britain much more than a supplicant’s role at Bretton Woods.

Conway’s book also provides a useful and accessible historical overview of the international monetary order from pre-World War I days up to Bretton Woods and beyond. The overview revolves around the gold standard as a basis for international currency exchanges and the attempts over the years to find workable alternatives. Bretton Woods produced such an alternative, a standard pegged to the United States dollar — which, paradoxically, was itself tied to the price of gold. Bretton Woods also produced two key institutions, the International Monetary Fund (IMF) and the International Bank for Reconstruction and Development, now known as the World Bank, designed to provide stability to the new economic order. But the Bretton Woods dollar standard remained in effect only until 1971, when US President Richard Nixon severed by executive fiat the link between the dollar and gold, allowing currency values to float, as they had done in the 1930s. In Conway’s view, the demise of Bretton Woods is to be regretted.

* * *

Keynes was a legendary figure when he arrived at Bretton Woods in July 1944, a “genuine international celebrity, the only household name at Bretton Woods” (p.xv). Educated at King’s College, Cambridge, a member of the faculty of that august institution, and a peer in Britain’s House of Lords, Keynes was also a highly skilled writer and journalist, as well as a fearsome debater. As a young man, he established his reputation with a famous critique of the 1919 Versailles Treaty, The Economic Consequences of the Peace, a tract that predicted with eerie accuracy the breakdown of the financial order that the post-World War I treaty envisioned, based upon imposition of punitive reparations upon Germany. Although Keynes dazzled fellow delegates at Bretton Woods with his rhetorical brilliance, he was given to outlandish and provocative statements that hardly helped the bonhomie of the conference. He suffered a heart attack toward the end of the conference and died less than two years later.

White was a contrast to Keynes in just about every way. He came from a modest first-generation Jewish immigrant family in Boston and had to scramble for his education. Unusual for the time, White earned an undergraduate degree from Stanford in his 30s, after having spent the better part of a decade as a social worker. White had a dour personality, with none of Keynes’ flamboyance. Then there were the physical differences. Keynes stood about six feet six inches tall (approximately 2.0 meters), whereas White was at least a foot shorter (approximately 1.7 meters). But if Keynes was the marquee star of Bretton Woods because of his personality and reputation, White was its driving force because he represented the United States, indisputably the conference’s dominant power.

By the time of the Bretton Woods conference, however, White was also on uncomfortably familiar terms with Soviet intelligence services. Although Conway hesitates to slap the “spy” label on him, there is little doubt that White provided a hefty amount of information to the Soviets, both at the conference and outside its confines. Of course, much of the “information sharing” took place during World War II, when the Soviet Union was allied with Britain and the United States in the fight against Nazi Germany, and such sharing was seen in a different light than in the subsequent Cold War era. One possibility, Conway speculates, is that White was “merely carrying out his own, personal form of diplomacy – unaware that the Soviets were construing this as espionage” (p.159; the Soviet Union attended the conference but did not join the international mechanisms which the conference established).

The reality, Conway concludes, is that we will “never know for certain whether White knowingly betrayed his country by passing information to the Soviets” (p.362). Critically, there is “no evidence that White’s Soviet activities undermined the Bretton Woods agreement itself” (p.163). White died in 1948, four years after the conference, and the FBI’s case against him became moot. From that point onward, the question whether White was a spy for the Soviet Union became one almost exclusively for historians, a question that today remains unresolved (ironically, after White’s death, young Congressman Richard Nixon remained just about the only public official still interested in White’s case; when Nixon became president two decades later, he terminated the Bretton Woods financial standards White had helped create).

The conference itself begins at about the book’s halfway point. Prior to his account of its deliberations, Conway shows how the gold standard operated and recounts the search for workable alternatives. In the period up to World War I, the world’s powers guaranteed that they could redeem their currency for its value in gold. The World War I belligerents went off the gold standard so they could print the currency needed to pay for their war costs, causing hyperinflation as the supply of money overwhelmed the demand for it. In the 1920s, countries gradually returned to the gold standard.

But the stock market crash of 1929 and the ensuing depression prompted countries to again abandon the gold standard. In the 1930s, what Conway terms a “gold exchange standard” prevailed, in which governments undertook competitive devaluations of their currencies. President Franklin Roosevelt, for example, used a “primitive scheme” to set the dollar “where he wanted it – which meant as low against the [British] pound as possible” (p.83). The competitive devaluations and floating rates of the 1930s led to restrictive trade policies, discouraged trade and investment, and encouraged destabilizing speculation, all of which many economists linked to the devastating war that broke out across the globe at the end of the decade.

Bretton Woods sought to eliminate these disruptions for the post-war world by crafting an international monetary system based upon cooperation among the world’s sovereign states. The conference was preceded by nearly two years of negotiations between the Treasury Departments of Great Britain and the United States — essentially exchanges between Keynes and White, each with a plan for how a new international monetary order should operate. Both were “determined to use the conference to safeguard their own economies” (p.18). Keynes wanted to protect not only the British Empire but also London’s place as the center of international finance. White saw little need to protect the empire and foresaw New York as the world’s new economic hub. He also wanted to locate the two institutions that Bretton Woods would create, the IMF and the World Bank, in the United States, whereas Keynes hoped that at least one would be located either in Britain or on the European continent. White and the Americans would win on these and almost all other points of difference.

But Keynes and White shared a broad general vision that Bretton Woods should produce a system designed to do away with the worst effects of both the gold standard and the interwar years of instability and depression.   There needed to be something in between the rigidity associated with the gold standard on the one hand and free-floating currencies, which were “associated with dangerous flows of ‘hot money’ and inescapable lurches in exchange rates” (p.124), on the other. To White and the American delegation, “Bretton Woods needed to look as similar as possible to the gold standard: politicians’ hands should be tied to prevent them from inflating away their debts. It was essential to avoid the threat of the competitive devaluations that had wreaked such havoc in the 1930s” (p.171).  For Keynes and his colleagues, “Bretton Woods should be about ensuring stable world trade – without the rigidity of the gold standard” (p.171).

The British and American delegations met in Atlantic City in June 1944 in an attempt to narrow their differences before travelling to northern New Hampshire, where the floor would be opened to the conference’s additional delegations. Much of what happened at Bretton Woods was confined to the business pages of the newspapers, with public attention focused on the war effort and President Roosevelt’s bid for a fourth term. This suited White, who “wanted the conference to look as uncontroversial, technical and boring as possible” (p.203). The conference was split into three main parts. White chaired Commission I, dealing with the IMF, while Keynes chaired Commission II, whose focus was the World Bank. Each commission divided into multiple committees and sub-committees. Commission III, whose formal title was “Other Means of International Cooperation,” was in Conway’s view essentially a “toxic waste dump into which White and Keynes could jettison some of the summit’s trickier issues” (p.216).

The core principle to emerge from the Bretton Woods deliberations was that the world’s currencies, rather than being tied directly to gold or allowed to float, would be pegged to the US dollar which, in turn, was tied to gold at a value of $35 per ounce. Keynes and White anticipated that fixing currencies against the dollar would ensure that:

international trade was protected from exchange rate risk. Nations would determine their own interest rates for purely domestic economic reasons, whereas under the gold standard, rates had been set primarily in order to keep the country’s gold stocks at an acceptable level. Countries would be allowed to devalue their currency if they became uncompetitive – but they would have to notify the International Monetary Fund in advance: this element of international co-ordination was intended to guard against a repeat of the 1930s spiral of competitive devaluation (p.369).

 

The IMF’s primary purpose under the Bretton Woods framework was to provide relief in balance of payments crises such as those of the 1930s, when countries in deficit were unable to borrow and exporting countries failed to find markets for their goods. “Rather than leaving the market to its own devices – the laissez-faire strategy discredited in the Depression – the Fund would be able to step in and lend countries money, crucially in whichever currency they most needed. So as to avoid the threat of competitive devaluations, the Fund would also arbitrate whether a country could devalue its exchange rate” (p.169).

One of the most sensitive issues in structuring the IMF involved the contributions that each country was required to pay into the Fund, termed “quotas.” When short of reserves, each member state would be entitled to borrow needed foreign currency in amounts determined by the size of its quota.  Most countries wanted to contribute more rather than less, both as a matter of national pride and as a means to gain future leverage with the Fund. Heated quota battles ensued “both publicly in the conference rooms and privately in the hotel corridors, until the very end of the proceedings” (p.222-23), with the United States ultimately determining quota amounts according to a process most delegations considered opaque and secretive.

The World Bank, almost an afterthought at the conference, was to have the power to finance reconstruction in Europe and elsewhere after the war. But the Marshall Plan, an “extraordinary program of aid devoted to shoring up Europe’s economy” (p.357), upended Bretton Woods’ visions for both institutions for nearly a decade. It was the Marshall Plan that rebuilt Europe in the post-war years, not the IMF or the World Bank. The Fund’s main role in its initial years, Conway notes, was to funnel money to member countries “as a stop-gap before their Marshall Plan aid arrived” (p.357).

When Harry Truman became president in April 1945 after Roosevelt’s death, he replaced Roosevelt’s Treasury Secretary Henry Morgenthau, White’s boss, with Fred Vinson, the future Chief Justice of the United States. Never a fan of White, Vinson diminished his role at Treasury, and White left the department in 1947. He died the following year, in August 1948, at age 55. Although the August 1945 change in British prime ministers from Winston Churchill to Clement Attlee did not undermine Keynes to the same extent, his deteriorating health diminished his role after Bretton Woods as well. Keynes died in April 1946 at age 62, shortly after returning to Britain from the inaugural IMF meeting in Savannah, Georgia, his last encounter with White.

Throughout the 1950s, the US dollar assumed a “new degree of hegemony,” becoming “formally equivalent to gold. So when they sought to bolster their foreign exchange reserves to protect them from future crises, foreign governments built up large reserves of dollars” (p.374). But with more dollars in the world economy, the United States found it increasingly difficult to convert them back into gold at the official exchange rate of $35 per ounce.  When Richard Nixon became president in 1969, the United States held $10.5 billion in gold, but foreign governments had $40 billion in dollar reserves, and foreign investors and corporations held another $30 billion. The world’s monetary system had become, once again, an “inverted pyramid of paper money perched on a static stack of gold” and Bretton Woods was “buckling so badly it seemed almost certain to collapse” (p.377).

In a single secluded weekend in 1971 at the presidential retreat at Camp David, Maryland, Nixon’s advisors fashioned a plan to “close the gold window”: the United States would no longer provide gold to official foreign holders of dollars and instead would impose “aggressive new surcharges and taxes on imports intended to push other countries into revaluing their own currencies” (p.381). When Nixon agreed to his advisors’ proposal, the Bretton Woods system, which had “begun with fanfare, an unprecedented series of conferences and the deepest investigation in history into the state of macro-economics,” ended overnight, “without almost anyone realizing it” (p.385). The era of fixed exchange rates was over, with currency values henceforth to be determined by “what traders and investors thought they were worth” (p.392). Since 1971, the world’s monetary system has operated on what Conway describes as an “ad hoc basis, with no particular sense of the direction in which to follow” (p.401).

* * *

In his epilogue, Conway cites a 2011 Bank of England study showing that between 1948 and the early 1970s, the world enjoyed a “period of economic growth and stability that has never been rivaled – before or since” (p.388). In Bretton Woods member states during this period, “life expectancy climbed swiftly higher, inequality fell, and social welfare systems were constructed which, for the time being at least, seemed eminently affordable” (p.388). The “imperfect” and “short-lived” (p.406) system which Keynes and White fashioned at Bretton Woods may not be the full explanation for these developments, but it surely contributed. In the messy world of international economics, that system has “come to represent something hopeful, something closer to perfection” (p.408). The two men at the center of this captivating story came to Bretton Woods intent upon repairing the world’s economic system and replacing it with something better — something that might avert future economic depressions and the resort to war to settle differences. “For a time,” Conway concludes, “they succeeded” (p.408).

Thomas H. Peebles

La Châtaigneraie, France

March 8, 2017


Filed under British History, European History, History, United States History, World History

Do Something


Zachary Kaufman, United States Law and Policy on Transitional Justice:

Principles, Politics, and Pragmatics 

The term “transitional justice” is applied most frequently to “post-conflict” situations, where a nation state or region is emerging from some type of war or violent conflict that has given rise to genocide, war crimes, or crimes against humanity — each now a recognized concept under international law, with “mass atrocities” being a common shorthand used to embrace these and related concepts. In United States Law and Policy on Transitional Justice: Principles, Politics, and Pragmatics, Zachary Kaufman, a Senior Fellow and expert on human rights at Harvard University’s Kennedy School of Government, explores the circumstances that have led the United States to support the portion of the transitional justice process that determines how to deal with suspected perpetrators of mass atrocities, and why it chooses a particular means of support (disclosure: Kaufman and I worked together in the US Department of Justice’s overseas assistance unit between 2000 and 2002, although we had different portfolios: Kaufman’s involved Africa and the Middle East, while I handled Central and Eastern Europe).

Kaufman’s book, adapted from his Oxford University PhD dissertation, centers on case studies of the United States’ role in four major transitional justice situations: Germany and Japan after World War II, and ex-Yugoslavia and Rwanda in the 1990s, after the end of the Cold War. It also looks more briefly at two secondary cases, the 1988 bombing of Pan American flight 103, attributed to Libyan nationals, and atrocities committed during Iraq’s 1990-91 occupation of Kuwait. Making extensive use of internal US government documents, many of which have been declassified, Kaufman digs deeply into the thought processes that informed the United States’ decisions on transitional justice in these six post-conflict situations. Kaufman brings a social science perspective to his work, attempting to tease out of the case studies general rules about how the United States might act in future transitional justice situations.

          The term “transitional justice” implicitly affirms that a permanent and independent national justice system can and should be created or restored in the post-conflict state.  Kaufman notes at one point that dealing with suspected perpetrators of mass atrocities is just one of several critical tasks involved in creating or restoring a permanent national justice system in a post-conflict state.  Others can include: building or rebuilding sustainable judicial institutions, strengthening the post-conflict state’s legislation, improving capacity of its justice-sector personnel, and creating or upgrading the physical infrastructure needed for a functioning justice system. These latter tasks are not the focus of Kaufman’s work. Moreover, in determining how to deal with alleged perpetrators of mass atrocities, Kaufman’s focus is on the front end of the process: how and why the United States determined to support this portion of the process generally and why it chose particular mechanisms rather than others.   The outcomes that the mechanisms produce, although mentioned briefly, are not his focus either.

In each of the four primary cases, the United States joined other nations to prosecute those accused or suspected of involvement in mass atrocities before an international criminal tribunal, which Kaufman characterizes as the “most significant type of transitional justice institution” (p.12). Prosecution before an international tribunal, he notes, can promote stability, the rule of law and accountability, and can serve as a deterrent to future atrocities. But the process can be both slow and expensive, with significant political and legal risks. Kaufman’s work provides a useful reminder that prosecution by an international tribunal is far from the only option available to deal with alleged perpetrators of mass atrocities. Others include trials in other jurisdictions, including those of the post-conflict state, and several non-judicial alternatives: amnesty for those suspected of committing mass atrocities, with or without conditions; “lustration,” where suspected persons are disenfranchised from specific aspects of civic life (e.g., declared ineligible for the civil service or the military); and “doing nothing,” which Kaufman considers tantamount to unconditional amnesty. Finally, there is the option of summary execution or other punishment, without benefit of trial. These options can be applied in combination, e.g., amnesty for some, trial for others.

Kaufman weighs two models, “legalism” and “prudentialism,” as potential explanations for why and how the United States acted in the cases under study and is likely to act in the future. Legalism contends that prosecution before an international tribunal of individuals suspected or accused of mass atrocities is the only option a liberal democratic state may elect, consistent with its adherence to the rule of law. In limited cases, amnesty or lustration may be justified as a supplement to initiating cases before a tribunal. Summary execution may never be justified. Prudentialism is more ad hoc and flexible, with the question whether to establish or invoke an international criminal tribunal or pursue other options determined by any number of political, pragmatic and normative considerations, including such geo-political factors as promotion of stability in the post-conflict state and region, the determining state or states’ own national security interests, and the relationships between determining states. Almost by definition, legalism precludes consideration of these factors.

Kaufman presents his cases in a highly systematic manner, with tight overall organization. An introduction and three initial chapters set forth the conceptual framework for the subsequent case studies, addressing matters like methodology and definitional parameters. The four major cases are then treated in four separate chapters, each with its own introduction and conclusion, followed by an overall conclusion, also with its own introduction and conclusion (the two secondary cases, Libya and Iraq, are treated within the chapter on ex-Yugoslavia). Substantive headings throughout each chapter make his arguments easy to follow. General readers may find jarring his extensive use of acronyms throughout the text, drawn from a three-page list contained at the outset. But amidst Kaufman’s deeply analytical exploration of the thinking that lay behind the United States’ actions, readers will appreciate his decidedly non-sociological hypothesis as to why the United States elects to engage in the transitional justice process: a deeply felt American need in the wake of mass atrocities to “do something” (always in quotation marks).

* * *

          Kaufman begins his case studies with the best-known example of transitional justice, Nazi Germany after World War II. The United States supported creation of what has come to be known as the Nuremberg War Crimes tribunal, a military court administered by the four victorious allies, the United States, Soviet Union, Great Britain and France. The Nuremberg story is so well known, thanks in part to “Judgment at Nuremberg,” the best-selling book and popular film, that most readers will assume that the multi-lateral Nuremberg trials were the only option seriously under consideration at the time. To the contrary, Kaufman demonstrates that such trials were far from the only option on the table.

For a while, the United States seriously considered summary executions of accused Nazi leaders. British Prime Minister Winston Churchill pushed this option during wartime deliberations and, Kaufman indicates, President Roosevelt seemed at times on the cusp of agreeing to it. Equally surprising, Soviet leader Joseph Stalin lobbied early and hard for a trial process rather than summary executions. The Nuremberg Tribunal “might not have been created without Stalin’s early, constant, and forceful lobbying” (p.89), Kaufman contends. Roosevelt abandoned his preference for summary executions after economic aspects of the Morgenthau Plan, which involved the “pastoralization” of Germany, were leaked to the press. When the American public “expressed its outrage at treating Germany so harshly through a form of economic sanctions,” Roosevelt concluded that Americans would be “unsupportive of severe treatment for the Germans through summary execution” (p.85).

But the United States’ support for war crimes trials became unwavering only after Roosevelt died in April 1945 and Harry S. Truman assumed the presidency. The details and mechanics of a multi-lateral trial process were not worked out until early August 1945 in the “London Agreement,” after Churchill had been voted out of office and Labour Prime Minister Clement Attlee represented Britain. Trials against 22 high-level Nazi officials began in November 1945, with verdicts rendered in October 1946: twelve defendants were sentenced to death, seven drew prison sentences, and three were acquitted.

Many lower-level Nazi officials were tried in unilateral prosecutions by one of the allied powers. Lustration, barring active Nazi party members from major public and private positions, was applied in the US, British, and Soviet sectors. Numerous high-level Nazi officials were allowed to emigrate to the United States to assist in Cold War endeavors, which Kaufman characterizes as a “conditional amnesty” (Nazi war criminals who emigrated to the United States are the subject of Eric Lichtblau’s The Nazis Next Door: How America Became a Safe Haven for Hitler’s Men, reviewed here in October 2015; Frederick Taylor’s Exorcising Hitler: The Occupation and Denazification of Germany, reviewed here in December 2012, addresses more generally the manner in which the Allies dealt with lower-level Nazi officials). By 1949, the Cold War between the Soviet Union and the West had undermined the allies’ appetite for prosecution, with the Korean War completing the process of diverting the world’s attention away from Nazi war criminals.

The story behind creation of the International Military Tribunal for the Far East, designed to hold accountable accused Japanese perpetrators of mass atrocities, is far less known than that of Nuremberg, Kaufman observes. What has come to be known as the “Tokyo Tribunal” largely followed the Nuremberg model, with some modifications. Even though 11 allies were involved, the United States was closer to being the sole decision-maker on the options to pursue in Japan than it had been in Germany. As the lead occupier of post-war Japan, the United States had “no choice but to ‘do something’” (p.119). Only the United States had both the means and the will to oversee the post-conflict occupation and administration of Japan. That oversight authority was vested largely in a single individual, General Douglas MacArthur, Supreme Commander of the Allied forces, whose extraordinarily broad, nearly dictatorial, authority in post-World War II Japan extended to the transitional justice process. MacArthur approved appointments to the tribunal, signed off on its indictments, and exercised review authority over its decisions.

            In the interest of securing the stability of post-war Japan, the United States accorded unconditional amnesty to Japan’s Emperor Hirohito. The Tokyo Tribunal indicted twenty-eight high-level Japanese officials, but more than fifty were not indicted, and thus also benefited from an unconditional amnesty. This included many suspected of “direct involvement in some of the most horrific crimes of WWII” (p.108), several of whom eventually returned to Japanese politics. Through lustration, more than 200,000 Japanese were removed or barred from public office, either permanently or temporarily.  As in Germany, by the late 1940s the emerging Cold War with the Soviet Union had chilled the United States’ enthusiasm for prosecuting Japanese suspected of war crimes.

The next major United States engagements in transitional justice arose in the 1990s, when the former Yugoslavia collapsed and lapsed into a spasm of ethnic violence, and when massive ethnic-based genocide erupted in Rwanda in 1994. By this time, the Soviet Union had itself collapsed and the Cold War was over. In both instances, heavy United States involvement in the post-conflict process was attributed in part to a sense of remorse for its lack of involvement in the conflicts themselves and its failure to halt the ethnic violence, resulting in a need to “do something.” Rwanda marks the only instance among the four primary cases where mass atrocities arose out of an internal conflict.

       The ethnic conflicts in Yugoslavia led to the creation of the International Criminal Tribunal for Yugoslavia (ICTY), based in The Hague and administered under the auspices of the United Nations Security Council. Kaufman provides much useful insight into the thinking behind the United States’ support for the creation of the court and the decision to base it in The Hague as an authorized Security Council institution. His documentation shows that United States officials consistently invoked the Nuremberg experience. The United States supported a multi-lateral tribunal through the Security Council because the council could “obligate all states to honor its mandates, which would be critical to the tribunal’s success” (p.157). The United States saw the ICTY as critical in laying a foundation for regional peace and facilitating reconciliation among competing factions. But it also supported the ICTY and took a lead role in its design to “prevent it from becoming a permanent [tribunal] with global reach” (p.158), which it deemed “potentially problematic” (p.157).

The United States’ willingness to involve itself in the post-conflict transitional process in Rwanda, even more than in ex-Yugoslavia, may be attributed to its failure to intervene during the worst moments of the genocide itself. That the United States “did not send troops or other assistance to Rwanda perversely may have increased the likelihood of involvement in the immediate aftermath,” Kaufman writes. A “desire to compensate for its foreign policy failures in Rwanda, if not also feelings of guilt over not intervening, apparently motivated at least some [US] officials to support a transitional justice institution for Rwanda” (p.197).

Once the Rwandan civil war subsided, there was a strong consensus within the international community that some kind of international tribunal was needed to impose accountability upon the most egregious génocidaires; that any such tribunal should operate under the auspices of the United Nations Security Council; that the tribunal should in some sense be modeled after the ICTY; and that the United States should take the lead in establishing the tribunal. The ICTY precedent prompted US officials to “consider carefully the consistency with which they applied transitional justice solutions in different regions; they wanted the international community to view [the US] as treating Africans similarly to Europeans” (p.182). According to these officials, after the precedent of proactive United States involvement in the “arguably less egregious Balkans crisis,” the United States would have found it “politically difficult to justify inaction in post-genocide Rwanda” (p.182).

The United States favored a tribunal modeled after and structurally similar to the ICTY, which came to be known as the International Criminal Tribunal for Rwanda (ICTR). The ICTR was the first international court with competence to “prosecute and punish individuals for egregious crimes committed during an internal conflict” (p.174), a watershed development in international law and transitional justice. To deal with lower-level génocidaires, the Rwandan government and the international community later instituted additional prosecutorial measures, including prosecutions by Rwandan domestic courts and local councils, termed gacaca.

No international tribunals were created in the two secondary cases, Libya after the 1988 Pan Am flight 103 bombing, and the 1990-91 Iraqi invasion of Kuwait. At the time of the Pan Am bombing, more than a decade prior to the September 11, 2001 attacks, United States officials considered terrorism a matter to be addressed “exclusively in domestic contexts” (p.156). In the case of the bombing of Pan Am 103, where Americans had been killed, competent courts were available in the United States and the United Kingdom. There were numerous documented cases of Iraqi atrocities against Kuwaiti civilians committed during Iraq’s 1990-91 invasion of Kuwait. But the 1991 Gulf War, while driving Iraq out of Kuwait, otherwise left Iraqi leader Saddam Hussein in power. The United States was therefore not in a position to impose accountability upon Iraqis for atrocities committed in Kuwait, as it had done after defeating Germany and Japan in World War II.

* * *

When Kaufman evaluates the prudentialism and legalism models as ways to explain the United States’ actions in the four primary cases, prudentialism emerges as the clear winner. He convincingly demonstrates that the United States was open in each case to multiple options and motivated by geo-political and other non-legal considerations. Indeed, it is difficult to imagine that the United States – or any other state, for that matter – would ever, in advance, agree to disregard such considerations, as the legalism model seems to demand. After reflecting upon Kaufman’s analysis, I concluded that legalism might best be understood as more aspirational than empirical, a forward-looking, prescriptive model for how the United States should act in future transitional justice situations, favored in particular by human rights organizations.

         But Kaufman also shows that the United States’ approach in each of the four cases was not entirely an ad hoc weighing of geo-political and related considerations.  Critical to his analysis are the threads which link the four cases, what he terms “path dependency,” whereby the Nuremberg trial process for Nazi war criminals served as a powerful influence upon the process set up for their Japanese counterparts; the combined Nuremberg-Tokyo experience weighed heavily in the creation of ICTY; and ICTY strongly influenced the structure and procedure of ICTR.   This cumulative experience constitutes another factor in explaining why the United States in the end opted for international criminal tribunals in each of the four cases.

         If a general rule can be extracted from Kaufman’s four primary cases, it might therefore be that an international criminal tribunal has evolved into the “default option” for the United States in transitional justice situations,  showing the strong pull of the only option which the legalism model considers consistent with the rule of law.  But these precedents may exert less hold on US policy makers going forward, as an incoming administration reconsiders the United States’ role in the 21st century global order. Or, to use Kaufman’s apt phrase, there may be less need felt for the United States to “do something” in the wake of future mass atrocities.

Thomas H. Peebles

Venice, Italy

February 10, 2017

 


Filed under American Politics, United States History

Reporting From the Front Lines of the Enlightenment


Robert Zaretsky, Boswell’s Enlightenment

The 18th century Enlightenment was an extraordinary time when religious skepticism rose across Europe and philosophes boldly asserted that man’s capacity for reason was the key to understanding both human nature and the nature of the universe. In Boswell’s Enlightenment, Robert Zaretsky, Professor of History at the University of Houston, provides a highly personalized view of the Enlightenment as experienced by James Boswell (1740-1795), the faithful Scottish companion to Dr. Samuel Johnson and author of a seminal biography of the learned doctor. The crux of Zaretsky’s story lies in Boswell’s tour of the European continent between 1763 and 1765 – the “Grand Tour” – where, as a young man, Boswell encountered seemingly all the period’s leading thinkers, including Jean Jacques Rousseau and François-Marie Arouet, known to history as Voltaire, then Europe’s two best-known philosophes. Zaretsky’s self-described purpose is to “place Boswell’s tour of the Continent, and situate the churn of his mind, against the intellectual and political backdrop of the Enlightenment” (p.16-17). Also figuring prominently in Zaretsky’s account are Boswell’s encounters, prior to departing for Europe, with several leading Scottish luminaries, most notably David Hume, Britain’s best-known religious skeptic. The account further includes the beginning phases of Boswell’s life-long relationship with Johnson, the “most celebrated literary figure in London” (p.71) and, for Boswell, already a “moral and intellectual rock” (p.227).

But Zaretsky’s title is a delicious double entendre, for his book is simultaneously the intriguing story of Boswell’s personal coming of age in the mid-18th century – his “enlightenment” with a small “e” – amidst the intellectual fervor of his times. The young Boswell, searching for himself, was more than a little sycophantic, with an uncommon facility to curry favor with the prominent personalities of his day – an unabashed 18th century celebrity hound. But Boswell also possessed a fertile, impressionable mind, along with a young man’s zest to experience life in all its facets. Upon leaving for his Grand Tour, moreover, Boswell was already a prolific if not yet entirely polished writer who kept a detailed journal of his travels, much of which survives. In his journal, the introspective Boswell was a “merciless self-critic” (p.97). Yet, Zaretsky writes, Boswell’s ability to re-create conversations and characters in his journals makes him a “remarkable witness to his age” (p.15). Few individuals “reported in so sustained and thorough a manner as did Boswell from the front lines of the Enlightenment” (p.13).

* * *

In his prologue, Zaretsky raises the question whether the 18th century Enlightenment should be considered a unified phenomenon, centered in France and radiating out from there, or whether it makes more sense to think of separate Enlightenments, such as, for example, both a Scottish and a French Enlightenment. This is a familiar theme to assiduous readers of this blog: in 2013, I reviewed Arthur Herman’s exuberant claim to a distinct Scottish Enlightenment, and Gertrude Himmelfarb’s more sober argument for distinctive French, English and American Enlightenments. Without answering this always-pertinent question, Zaretsky turns his account to young Boswell’s search for himself and the greatest minds of 18th century Europe.

Boswell was the son of a prominent Edinburgh judge, Alexander Boswell, Lord Auchinleck, a follower of John Knox’s stern brand of Calvinism and an overriding force in young Boswell’s life. Boswell’s effort to break the grip that his father exerted over his life was also in many senses an attempt to break the grip of his Calvinist upbringing. When, as a law student in Edinburgh, his son developed what Lord Auchinleck considered a most unhealthy interest in theatre — and in women working in the theatre — he sent the wayward young man from lively and overly liberal Edinburgh to more subdued Glasgow. There, Boswell came under the influence of the renowned professor Adam Smith. Although his arguments for the advantages of laissez-faire capitalism came later, Smith was already a sensation across Europe for his view that empathy, or “fellow feeling,” was the key to understanding what makes human beings good. A few years later, Lord Auchinleck started his son on his Grand Tour across the European continent by insisting that young Boswell study civil law in the Netherlands, as he himself had done in his student days.

Throughout his travels, the young Boswell wrestled with the question of religious faith and how it might be reconciled with the demands of reason. The religious skepticism of Hume, Voltaire, and Rousseau weighed on him. But, like Johnson, Boswell was not quite ready to buy into it. For Boswell, reason was “not equal to the task of absorbing the reality of our end, this thought of our death. Instead, religion alone offered respite” (p.241). In an age where death was a “constant and dire presence,” Boswell “stands out for his preoccupation, if not obsession, with his mortal end” (p.15). Boswell’s chronic “hypochondria” – the term used in Boswell’s time for depression – was “closely tied to his preoccupation with his mortality” (p.15). For Boswell, as for Johnson, what drove the defense of traditional religion was “less fear of hell than fear of nothingness – what both men called ‘annihilation’” (p.85).

Boswell’s fear of the annihilation of death probably helps explain his lifelong fascination with public executions. Throughout the Grand Tour, he consistently went out of his way to attend these very public 18th century spectacles, “transfixed by the ways in which the victims approached their last moments” (p.15). Boswell’s attraction to public executions, whose official justification was to “educate the public on the consequences of crime,” was, Zaretsky notes, “exceptional even among his contemporaries” (p.80). But if the young Boswell feared death, he dove deeply into life and, through his journal, shared his dives with posterity.

A prodigious drinker and carouser, Boswell seduced women across the continent, often the wives of men he was meeting to discuss the profound issues of life and death. At seemingly every stop along the way, moreover, he patronized establishments practicing the world’s oldest profession, with several bouts of gonorrhea resulting from these visits, followed by excruciatingly painful medical treatments. Boswell’s multiple encounters with the opposite sex form a colorful portion of his journal and are no small part of the reason why the journal continues to fascinate readers to this day.

But Boswell’s first significant encounter with the opposite sex during the Grand Tour was also his first significant encounter on the continent with an Enlightenment luminary, Elisabeth van Tuyll van Serooskerken, whose name the young Scot wisely shortened to “Belle.” Boswell met Belle in Utrecht, the Netherlands, his initial stop on the Grand Tour, where he was ostensibly studying civil law. Belle, who went on to write several epistolary novels under her married name, Isabelle de Charrière, was a sophisticated religious skeptic who understood the “social and moral necessity of religion; but she also understood that true skepticism entailed, as Hume believed, a kind of humility and intellectual modesty” (p.127). Belle was not free of religious doubt, Zaretsky notes, but unlike Boswell, she was “free of the temptation to seek certainty” (p.127). Boswell was attracted to Belle’s “lightning” mind, which, as he wrote a friend, “flashes with so much brilliance [that it] may scorch” (p.117). But Belle was not nearly as smitten with Boswell as he was with her, and her father never bothered to pass on to his daughter the marriage proposal that Boswell had presented to him. The two parted when Boswell left Utrecht, seeking to put his unrequited love behind him.

Boswell headed from the Netherlands to German-speaking Prussia and its king, “enlightened despot” Frederick the Great. Zaretsky considers Frederick “far more despotic than enlightened” (p.143), but Frederick plainly saw the value to the state of religious tolerance. “Here everyone must be allowed to go to heaven in his own way” (p.145) summarized Frederick’s attitude toward religion. Frederick proved to be one of the era’s few luminaries who was “indifferent to the Scot’s irrepressible efforts at presenting himself to them” (p.141), and Boswell had little direct time with the Prussian monarch during his six-month stay.

But Boswell managed back-to-back visits with Rousseau and Voltaire in Switzerland, his next destination. Rousseau and Voltaire had both been banished from Catholic France for heretical religious views. Rousseau, who was born in Calvinist Geneva, was no longer welcome in that city either, because of his religious views. Beyond a shared disdain for organized religion, the former friends disagreed on just about everything else — culture and civilization, theater and literature, politics and education. Zaretsky’s chapter on these visits, entitled “The Distance Between Môtiers and Ferney” – a reference to the remote locales where, respectively, Rousseau and Voltaire resided – is in my view the book’s best, with an erudite overview of the two men’s wide-ranging thinking, their reactions to their impetuous young visitor, and the enmity that separated them.

Zaretsky describes Rousseau as a “poet of nature” (p.148), for whom religious doctrines led “not to God, but instead to oppression and war” (p.149). But Rousseau also questioned his era’s advances in learning and the Enlightenment’s belief in human progress. The more science and the arts advanced, Rousseau argued, the more contemporary society became consumed by personal gain and greed. Voltaire, the “high priest of the French Enlightenment” (p.12), was a poet, historian and moralist who had fled from France to England in the 1720s because of his heretical religious views. There, he absorbed the thinking of Francis Bacon, John Locke and Isaac Newton, whose pragmatic approach and grounded reason he found superior to the abstract reasoning and metaphysical speculation that he associated with Descartes. While not an original or systematic thinker like Locke or Bacon, Voltaire was an “immensely gifted translator of their work and method” (p.172).

By the time Boswell arrived in Môtiers, the two philosophes were no longer on speaking terms. Rousseau publicly termed Voltaire a “mountebank” and “impious braggart,” a man of “so much talent put to such vile use” (p.158). Voltaire returned the verbal fire with a string of vitriolic epithets, among them “ridiculous,” “depraved,” “pitiful,” and “abominable.” The clash between the two men went beyond epithets and name-calling. Rousseau publicly identified Voltaire as the author of the Sermon of the Fifty, a “brutal and hilarious critique of Christian scripture” (p.180). Voltaire, for his part, revealed that Rousseau had fathered five children with his partner Thérèse Levasseur, children whom the couple had subsequently abandoned.

The enmity between the two men was not an obstacle to Boswell visiting each, although his actual meetings constitute a minor portion of this engrossing chapter. Boswell had an “improbable” five separate meetings with the usually reclusive Rousseau. They were wide-ranging, with the “resolute and relentless” Boswell pursuing “questions great and small, philosophical and personal” (p.156). When Boswell pressed Rousseau on how religious faith could be reconciled with reason, however, Rousseau’s answer was, in essence, that’s for you to figure out. Boswell did not fare much better with Voltaire on how he might reconcile reason with religious faith.

Unlike Rousseau, Voltaire was no recluse. He prided himself on being the “innkeeper of Europe” (p.174), and his residence at Ferney was usually overflowing with visitors. Despite spending several days at Ferney, Boswell managed only a single one-on-one meeting with the man he described as the “Monarch of French Literature” (p.176). In a two-hour conversation that reached what Zaretsky terms “epic proportions” (p.178), the men took up the subject of religious faith. “If ever two men disputed with vehemence we did” (p.178), Boswell wrote afterwards. The young traveler wrote eight pages on the encounter in a document separate from his journal; alas, those pages have been lost to history. But we know that Boswell left the meeting more than a little disappointed that Voltaire could not provide the definitive resolution he was seeking of how to bridge the chasm between reason and faith.

After a short stay in Italy that included “ruins and galleries . . . brothels and bawdy houses . . . churches and cathedrals” (p.200), Boswell’s last stop on the Grand Tour was the island of Corsica, a distant and exotic location that few Britons had ever visited. There, he met General Pasquale Paoli, leader of the movement for Corsican independence from the city-state of Genoa, which exercised control over most of the island. Paoli was already attracting attention throughout Europe for his determination to establish a republican government on the island. Rousseau, who had been asked to write a constitution for an independent Corsica, wrote Boswell a letter of introduction to Paoli. During Boswell’s six-day visit to the island, Paoli treated the mesmerized young Scot increasingly like a son. Paoli “embodied those ancient values that Boswell most admired, though frequently failed to practice: personal integrity and public authority; intellectual lucidity and stoic responsibility” (p.232). Paoli’s leadership of the independence movement demonstrated to Boswell that heroism was still alive, an “especially crucial quality in an age like his of philosophical and religious doubt” (p.217). Upon returning to Britain, Boswell became a vigorous advocate for Paoli and the cause of Corsican independence.

Boswell’s tour of the continent ended — and Zaretsky’s narrative ends — with a dramatic flourish that Zaretsky likens to episodes in Henry Fielding’s then popular novel Tom Jones. While Boswell was in Italy, Rousseau and Thérèse were forced to flee Môtiers because of hostile reaction to Voltaire’s revelation about the couple’s five children. By chance, David Hume, who had been in Paris, was able to escort Rousseau into exile in England, leaving Thérèse temporarily behind. Boswell somehow got wind of Thérèse’s situation and, sensing an opportunity to win favor with Rousseau, eagerly accepted her request to escort her to England to join her partner. But over the course of the 11-day trip to England, Boswell and Thérèse “found themselves sharing the same bed. Inevitably, Boswell recounted his sexual prowess in his journal: ‘My powers were excited and I felt myself vigorous’” (p.225). No less inevitably, Zaretsky notes, Boswell also recorded Thérèse’s “more nuanced response: ‘I allow that you are a hardy and vigorous lover, but you have no art’” (p.225).

* * *

After following Boswell’s encounters across the continent with many of the period’s most illustrious figures, I was disappointed that Zaretsky does not return to the question he raises initially about the nature of the 18th century Enlightenment. It would have been interesting to learn what conclusions, if any, he draws from Boswell’s journey. Does the young Scot’s partaking of the thoughts of Voltaire, Rousseau and others, and his championing of the cause of Corsican independence, suggest a single movement indifferent to national and cultural boundaries? Or should Boswell best be considered an emissary of a peculiarly Scottish form of Enlightenment? Or was Boswell himself too young, too impressionable – too full of himself – to allow any broader conclusions about the nature of the 18th century Enlightenment to be drawn from his youthful experiences? These unanswered questions constitute a missed opportunity in an otherwise engaging account of a young man seeking to make sense of the intellectual currents that were riveting his 18th century world and to apply them in his personal life.

Thomas H. Peebles

Florence, Italy

January 25, 2017

 


Filed under European History, History, Intellectual History, Religion

Stopping History

 


Mark Lilla, The Shipwrecked Mind:

On Political Reaction 

Mark Lilla is one of today’s most brilliant scholars writing on European and American intellectual history and the history of ideas. A professor of humanities at Columbia University and previously a member of the Committee on Social Thought at the University of Chicago (as well as a native of Detroit!), Lilla first came to public attention in 2001 with The Reckless Mind: Intellectuals in Politics. This compact work portrayed eight 20th century thinkers who rejected Western liberal democracy and aligned themselves with totalitarian regimes. Some were well known, such as German philosopher and Nazi sympathizer Martin Heidegger, but others were quite obscure to general readers. He followed with another thought-provoking work, The Stillborn God: Religion, Politics, and the Modern West, a study of “political theology,” the implications of secularism, and the degree to which religion and politics have been decoupled in modern Europe.

In his most recent work, The Shipwrecked Mind: On Political Reaction, Lilla probes the elusive and, in his view, understudied mindset of the political reactionary. The first thing we need to understand about reactionaries, he tells us at the outset, is that they are not conservatives. They are “just as radical as revolutionaries and just as firmly in the grip of historical imaginings” (p.xii). The mission of the political reactionary is to “stand athwart history, yelling Stop,” Lilla writes, quoting a famous line from the first edition of William F. Buckley’s National Review, a publication which he describes as “reactionary” (p.xiii). But the National Review is widely considered to embody the voice of traditional American conservatism, an indication that the distinction between political reactionary and traditional conservative is not always clear-cut. Lilla’s notion of political reaction overlaps with other terms such as “anti-modern” and the frequently used “populism.” He mentions both but does not draw out distinctions between them and political reaction.

            For Lilla, political reactionaries have a heightened sense of doom and maintain a more apocalyptic worldview than traditional conservatives. The political reactionary is driven by a nostalgic vision of an idealized, golden past and is likely to blame “elites” for the deplorable current state of affairs. The betrayal of elites is the “linchpin of every reactionary story” (p.xiii), he notes. In a short introduction, Lilla sets forth these definitional parameters and also traces the origins of our concept of political reaction to a certain type of opposition to the French Revolution and the 18th century Enlightenment.

          The nostalgia for a lost world “settled like a cloud on European thought after the French Revolution and never fully lifted” (p.xvi), Lilla notes. Whereas conservative Edmund Burke recoiled at the French Revolution’s wholesale uprooting of established institutions and its violence but was willing to admit that France’s ancien régime had grown ossified and required modification, quintessential reactionary Joseph de Maistre mounted a full-throated defense of the ancien régime.   For de Maistre, 1789 “marked the end of a glorious journey, not the beginning of one” (p.xii).

         If the reactionary mind has its roots in counter-revolutionary thinking, it endures today in the absence of political revolution of the type that animated de Maistre. “To live a modern life anywhere in the world today, subject to perpetual social and technological change, is to experience the psychological equivalent of permanent revolution,” Lilla writes (p.xiv). For the apocalyptic imagination of the reactionary, “the present, not the past, is a foreign country” (p.137). The reactionary mind is thus a “shipwrecked mind. Where others see the river of time flowing as it always has, the reactionary sees the debris of paradise drifting past his eyes. He is time’s exile” (p.xiii).

      The Shipwrecked Mind is not a systematic or historical treatise on the evolution of political reaction. Rather, in a disparate collection of essays, Lilla provides examples of reactionary thinking.  He divides his work into three main sections, “Thinkers,” “Currents,” and “Events.” “Thinkers” portrays three 20th century intellectuals whose works have inspired modern political reaction. “Currents” consists of two essays with catchy titles, “From Luther to Wal-Mart” and “From Mao to St. Paul”; the former is a study of “theoconservatism,” reactionary religious strains found within traditional Catholicism, evangelical Protestantism, and neo-Orthodox Judaism; the latter looks at a more leftist nostalgia for a revolutionary past. “Events” contains Lilla’s reflections on the January 2015 terrorist attacks in Paris on the Charlie Hebdo publication and a kosher supermarket.  But like the initial “Thinkers” section, “Currents” and “Events” are above all introductions to the works of reactionary thinkers, most of whom are likely to be unfamiliar to English-language readers.

            The Shipwrecked Mind appeared at about the same time as the startling Brexit vote in the United Kingdom, a time when Donald Trump was in the equally startling process of securing the Republican Party’s nomination for the presidency of the United States. Neither Brexit nor the Trump campaign figures directly in Lilla’s analysis, and readers will therefore have to connect the dots themselves between his diagnosis of political reaction and these events. Contemporary France looms larger in his effort to explain the reactionary mind, in part because Lilla was in Paris at the time of the January 2015 terrorist attacks.

* * *

            “Thinkers,” Lilla’s initial section, is similar in format to The Reckless Mind, consisting of portraits of Leo Strauss, Eric Voegelin, and Franz Rosenzweig, three German-born theorists whose work is “infused with modern nostalgia” (p.xvii). Of the three, readers are most likely to be familiar with Strauss (1899-1973), a Jewish refugee from Germany whose parents died in the Holocaust. Strauss taught philosophy at the University of Chicago from 1949 up to his death in 1973. Assiduous tomsbooks readers will recall my review in January 2014 of The Truth About Leo Strauss: Political Philosophy and American Democracy, by Michael and Catherine Zuckert, which dismissed the purported connection between Strauss and the 2003 Iraq war as based on a failure to dig deeply enough into Strauss’ complex, tension-ridden views about America and liberal democracy. Like the Zuckerts, Lilla considers the connection between Strauss and the 2003 Iraq war “misplaced” and “unseemly,” but, more than the Zuckerts, finds “quite real” the connection between Strauss’ thinking and that of today’s American political right (p.62).

        Strauss’ salience to political reaction starts with his view that Machiavelli, whom Strauss considered the first modern philosopher, is responsible for a decisive historical break in the Western philosophical tradition. Machiavelli turned philosophy from “pure contemplation and political prudence toward willful mastery of nature” (p.xviii), thereby introducing passion into political and social life. Strauss’ most influential work, Natural Right and History, argued that “natural justice” is the “standard by which political arrangements must be judged” (p.56). After the tumult of the 1960s, some of Strauss’ American disciples began to see this work as an argument that the West is in crisis, unable to defend itself against internal and external enemies. Lilla suggests that Natural Right and History has been misconstrued in the United States as an argument that political liberalism’s rejection of natural rights leads invariably to a relativism indistinguishable from nihilism. This misinterpretation led “Straussians” to the notion that the United States has a “redemptive historical mission — an idea nowhere articulated by Strauss himself” (p.61).

          Voegelin (1901-1985), a contemporary of Strauss, was born in Germany and raised in Austria, from which he fled in 1938 at the time of its Anschluss with Germany.   Like Strauss, he spent most of his academic career in the United States, where he sought to explain the collapse of democracy and the rise of totalitarianism in terms of a “calamitous break in the history of ideas, after which intellectual and political decline set in” (p.xviii). Voegelin argued that in inspiring the liberation of politics from religion, the 18th century Enlightenment gave rise in the 20th century to mass ideological movements such as Marxism, fascism and nationalism.  Voegelin considered these movements “’political religions,’ complete with prophets, priests, and temple sacrifices” (p.31). As Lilla puts it, for Voegelin, when you abandon the Lord, it is “only a matter of time before you start worshipping a Führer” (p.31).

        Rosenzweig (1886-1929) was a German Jew who gained fame in his time for backing off at the last moment from a conversion to Christianity – the equivalent of leaving his bride at the altar – and went on to dedicate his life to a revitalization of Jewish thought and practice. Rosenzweig shared an intellectual nostalgia prevalent in pre-World War I Germany which held that the political unification of Germany decades earlier, while giving rise to a wealthy bourgeois culture and the triumph of the modern scientific spirit, had extinguished something essential that could “only be recaptured through some sort of religious leap” (p.4). Rosenzweig rejected Judaism’s efforts to reform itself “according to modern notions of historical progress, which were rooted in Christianity” in favor of a new form of thinking that would “turn its back on history in order to recapture the vital transcendent essence of Judaism” (p.xvii-xviii).

          Lilla’s sensitivity to the interaction between religion and politics, the subject of The Stillborn God and the portraits of Voegelin and Rosenzweig here, is again on display in the two essays in the middle “Currents” section. In “From Luther to Wal-Mart,” Lilla explores how, despite doctrinal differences, traditional Catholicism, evangelical Protestantism, and neo-Orthodox Judaism in the United States came to share a “sweeping condemnation of America’s cultural decline and decadence.”  This “theoconservatism” (p.xix) blames today’s perceived decline and decadence on reform movements within these denominations and on what its adherents perceive as secular attacks on religion generally, frequently identifying the turbulent 1960s as the significant breaking point in American political and religious history.

         Two works figure prominently in this section, Alasdair MacIntyre’s 1981 After Virtue and Brad Gregory’s 2012 The Unintended Reformation. MacIntyre, echoing de Maistre, argued that the Enlightenment had undone a system of morality worked out over centuries, unwittingly preparing the way for “acquisitive capitalism, Nietzscheanism, and the relativistic liberal emotivism we live with today, in a society that ‘cannot hope to achieve moral consensus’” (p.74-75). Gregory, inspired by MacIntyre, attributed contemporary decline and decadence in significant part to forces unleashed in the Reformation, undercutting the orderliness and certainty of “medieval Christianity,” his term for pre-Reformation Catholicism. Building on Luther and Calvin, Reformation radicals “denied the need for sacraments or relics,” and left believers unequipped to interpret the Bible on their own, leading to widespread religious conflict. Modern liberalism ended these conflicts but left us with the “hyper-pluralistic, consumer-driven, dogmatically relativistic world of today. And that’s how we got from Luther to Walmart” (p.78-79).

        “From Mao to St. Paul” considers a “small but intriguing movement on the academic far left” which maintains a paradoxical nostalgia for “revolution” or “the future,” and sees “deep affinities” between Saint Paul and modern revolutionaries such as Lenin and Chairman Mao (p.xx).  Jacob Taubes, a peripatetic Swiss-born Jew who taught in New York, Berlin, Jerusalem and Paris, sought to demonstrate in The Political Theology of Paul that Paul was a “distinctively Jewish fanatic sent to universalize the Bible’s hope of redemption, bringing this revolutionary new idea to the wider world. After Moses, there was never a better Jew than Paul” (p.90). French theorist Alain Badiou, among academia’s last surviving Maoists, argued that Paul was to Jesus as Lenin was to Marx. The thinker who looms largest over this far-left academic movement is Nazi legal scholar Carl Schmitt, Hitler’s “crown jurist” (p.99), a figure portrayed in The Reckless Mind who emphasized the importance of human capacity and will rather than principles of natural right in organizing society.

         The third section, “Events,” considers France’s simmering cultural war over the place of Islam in French society, particularly in the aftermath of the January 2015 terrorist attacks in Paris, which Lilla sees as a head-on collision between two forms of political reaction:

On the one side was the nostalgia of the poorly educated killers for an imagined, glorious Muslim past that now inspires dreams of a modern caliphate with global ambitions. On the other was the nostalgia of French intellectuals who saw in the crime a confirmation of their own fatalistic views about the decline of France and the incapacity of Europe to assert itself in the face of a civilizational challenge (p.xx).

        France’s struggle to integrate its Muslim population, Lilla argues, has revived a tradition of cultural despair and nostalgia for a Catholic monarchist past that had flourished in France between the 1789 Revolution and the fall of France in 1940, but fell out of favor after World War II because of its association with the Vichy government and France’s role in the Holocaust. In the early post-war decades in France, it was “permissible for a French writer to be a conservative but not a reactionary, and certainly not a reactionary with a theory of history that condemned what everyone else considered to be modern progress” (p.108). Today, it is once again permissible in France to be a reactionary.

           “Events” concentrates on two best-selling works that manifest the revival of the French reactionary tradition, Éric Zemmour’s Le Suicide français, published in 2014, and Michel Houellebecq’s dystopian novel, Submission, first published on the very day of the January 2015 Charlie Hebdo attacks, an “astonishing, almost unimaginable” coincidence (p.116). Le Suicide français presents a “grandiose, apocalyptic vision of the decline of France” (p.108), with a broad range of culprits contributing to the decline, including feminism, multiculturalism, French business elites, and European Union bureaucrats. But Zemmour reserves particular contempt for France’s Muslim citizens.  Le Suicide français provides the French right with a “common set of enemies,” stirring an “outraged hopelessness – which in contemporary politics is much more powerful than hope” (p.117).

         Submission is the story of the election in 2022 of a Muslim president of France, with the support of France’s mainstream political parties, which seek to prevent the far-right National Front party from winning the presidency.  In Lilla’s interpretation, the novel serves to express a “recurring European worry that the single-minded pursuit of freedom – freedom from tradition and authority, freedom to pursue one’s own ends – must inevitably lead to disaster” (p.127).  France for Houellebecq has “regrettably and irretrievably, lost its sense of self” as a result of a wager on history made at the time of the Enlightenment that the more Europeans “extended human freedom, the happier they would be” (p.128-29). For Houellebecq, “by any measure France’s most significant contemporary writer” (p.109), that wager has been lost. “And so the continent is adrift and susceptible to a much older temptation, to submit to those claiming to speak for God” (p.129).

          Lilla’s section on France ends on this ominous note. But in an “Afterword,” Lilla returns to contemporary Islam, the other party to the head-on collision of competing reactionaries at work in the January 2015 terrorist attacks in Paris and their aftermath.  Islam’s belief in a lost Golden Age is the “most potent and consequential” political nostalgia in operation today (p.140), Lilla contends. According to radical Islamic myth, out of a state of jahiliyya, ignorance and chaos, the Prophet Muhammad was “chosen as the vessel of God’s final revelation, which uplifted all individuals and peoples who accepted it.” But, “astonishingly soon, the élan of this founding generation was lost. And it has never been recovered” (p.140). Today the forces of secularism, individualism, and materialism have “combined to bring about a new jahiliyya that every faithful Muslim must struggle against, just as the Prophet did at the dawn of the seventh century” (p.141).

* * *

          The essays in this collection add up to what Lilla describes as a “modest start” (p.xv) in probing the reactionary mindset and are intriguing as far as they go. But I finished The Shipwrecked Mind hoping that Lilla will extend this modest start. Utilizing his extensive learning and formidable analytical skills, Lilla is ideally equipped to provide a systematic, historical overview of the reactionary tradition, an overview that would highlight its relationship to the French Revolution and the 18th century Enlightenment in particular but to other historical landmarks as well, especially the 1960s. In such a work, Lilla might also provide more definitional rigor to the term “political reactionary” than he does here, elaborating upon its relationship to traditional conservatism, populism, and anti-modernism.  In what might be a separate work, Lilla is also well placed to help us connect the dots between political reaction and the turmoil generated by Brexit and the election of Donald Trump.  In less than six months, moreover, we will also know whether we will need to ask Lilla to connect the dots between his sound discussion here of political reaction in contemporary France and a National Front presidency.

 

Thomas H. Peebles

La Châtaigneraie, France

January 5, 2017

 

 


 


Filed under Intellectual History, Political Theory, Religion

New Form of Dominion


Tom Burgis, The Looting Machine:
Warlords, Oligarchs, Corporations, Smugglers, and the Theft of Africa’s Wealth

     Sub-Saharan Africa today is awash in the critical natural resources that fuel what we term the modern way of life. It is the repository of 15% of the planet’s crude oil reserves, 40% of its gold, and 80% of its platinum, along with the world’s richest diamond mines and significant deposits of uranium, copper, iron ore, and bauxite, the ore that is refined to make aluminum. Yet, the immense wealth that these resources produce is all too often siphoned off at the top of African states, with little positive effect for everyday citizens of those states. The richer a country is in natural resources, the poorer its people, or so it seems. This is what Tom Burgis, an investigative journalist for the Financial Times, terms the “resource curse” in his passionately argued indictment of Africa’s ruling elites and their cohorts, The Looting Machine: Warlords, Oligarchs, Corporations, Smugglers, and the Theft of Africa’s Wealth.

     The resource curse enables rulers of resource dependent states to “govern without recourse to popular consent,” Burgis contends. “Instead of calling their rulers to account, the citizens of resource states are reduced to angling for a share of the loot. This creates an ideal fiscal system for supporting autocrats” (p.73-74). Resource dependent states are thus “hard-wired for corruption. Kleptocracy, or government by thieves, thrives” (p.5). The resource curse is not unique to Africa, Burgis emphasizes, but it is “at its most virulent on the continent that is at once the world’s poorest and, arguably, its richest” (p.5). Once dominated by colonial European powers and subsequently by Cold War superpowers, sub-Saharan Africa today is subject to what Burgis terms a “new form of dominion . . . controlled not by nations but by alliances of unaccountable African rulers governing through shadow states, middlemen who connect them to the global resource economy, and multinational companies from the West and the East that cloak their corruption in corporate secrecy” (p.244).

     The resource curse gives rise to what Burgis terms “looting machines.” In a series of case studies, he demonstrates looting machines in action in several African states. He devotes most attention to Angola, Nigeria and the Democratic Republic of Congo, but also provides examples of the siphoning of state resources in numerous other African states, including Botswana, Ghana, Guinea, Madagascar, Niger, Sierra Leone, South Africa, and Zimbabwe. Burgis’ case studies delve deeply into the highly complex and often-opaque transactions typical of looting machines across the continent.  Some readers may find these portions of his case studies overly detailed and slow going. But the studies also feature  warm portraits of individual Africans affected by the continent’s looting machines.

     Looting machines can work, Burgis explains, only when they are “plugged into international markets for oil and minerals. For that, Africa’s despots need allies in the resource industry” (p.107). Major Western international corporations still play a significant role in Africa, continuing a presence that often dates back to the colonial period. There is also no shortage of middlemen working to put African rulers and states together with international buyers, including several colorful characters portrayed here. Today’s middlemen often have a relationship to China and Chinese enterprises, now the major source of competition for Western corporations across the continent. Burgis’ insights into how Chinese connections abet the resource curse in Africa are among his most valuable contributions to our understanding of 21st century Africa.

* * *

     China has reshaped Africa’s economy through cheap loans to fund infrastructure in resource-dependent states, built by Chinese companies and repaid in oil or minerals — “infrastructure without interference,” as Burgis puts it, a “genuinely new bargain” (p.133). China builds roads, ports and refineries “on a scale scarcely countenanced by the European colonizers or the cold warriors. In exchange it [has] sought not allegiance to a creed so much as access to oil, minerals, and markets” (p.133-34). Burgis reveals numerous instances where Chinese firms receive natural resources, whether minerals or petroleum, for far less than the fair market price, often with kickbacks to individual local leaders. Swapping infrastructure and cheap credit for natural resources permits China to buy its way into “established Western companies that have long profited from the continent’s oil and minerals” (p.143).

     Burgis begins with a case study of oil rich Angola, to which he returns frequently throughout the book. Africa’s third largest economy, after Nigeria and South Africa, Angola is also the continent’s second largest exporter of oil after Nigeria. Following independence from Portugal in 1975, the country was shattered by Cold War proxy wars between factions sponsored by the Soviet Union and the United States. When the wars ended, political and economic power devolved to the Futungo, a collection of Angola’s most powerful families, which embarked on the “privatization of power,” using Sonangol, Angola’s state-owned oil company. An Angolan expert termed Sonangol a “shadow government controlled and manipulated by the [Angolan] presidency” (p.11). Sonangol awarded itself stakes in oil ventures operated by foreign companies, using the revenues to “push its tentacles into every corner of the domestic economy: property, health care, banking, aviation” (p.11), even a professional football team.

     Burgis uncovered in Angola a pattern that repeats itself in other African resource-dependent states, in which owners of front companies, concealed behind layers of corporate secrecy, are the “very officials who influence or control the granting of rights to oil and mining prospects and who are seeking to turn that influence into a share of the profits” (p.15). In a deal between Sonangol and a Texas-based oil exploration company, Cobalt International Energy, Sonangol insisted upon including an unknown local company as junior partner, Nazaki Oil and Gáz, ostensibly to help Angolans gain a foothold in an industry that provides almost all the country’s export revenue but accounts for barely one per cent of its jobs. By its own account, Cobalt went ahead with the deal “without knowing the true identity of its partner, a company with no track record in the industry and registered to an address on a Luanda backstreet that [Burgis] found impossible to locate when [he] went looking for it in 2012” (p.17).

     Burgis’ own investigation revealed that three of Angola’s most powerful men held concealed stakes in Nazaki, including Sonangol’s CEO and the head of the president’s security detail. Nazaki’s involvement, an Angolan anti-corruption activist found, revealed a system of plunder in which the “spoils of power in Angola are shared by the few, while the many remain poor” (p.16). An audit of Angola’s national accounts conducted by the International Monetary Fund in 2011 estimated that between 2007 and 2010, $32 billion in Sonangol’s oil revenues should have gone to the state treasury but instead had “gone missing” (p.12), most of which could be traced to Sonangol’s off-the-books spending.

     Nigeria, the continent’s most populous state, is also its largest oil producer and perhaps its most corrupt, although there are plenty of candidates for that distinction. Nigeria has been “hollowed out by corruption that has fattened a ruling class of stupendous wealth while most of the rest lack the means to fill their stomachs, treat their ailments, or educate their children” (p.63), Burgis contends. He uses Nigeria to illustrate “Dutch Disease,” a term The Economist coined in 1977 to describe the aftereffects in the northern Netherlands when Royal Dutch Shell and Exxon discovered Europe’s largest natural gas field. A gas bonanza followed, but people outside the energy industry began losing their jobs and other sectors of the economy slumped. Although the Netherlands had strong institutions that enabled it to withstand Dutch Disease, throughout Africa the disease has been a “pandemic,” with symptoms that “include poverty and oppression” (p.70).

     Dutch Disease “enters a country through its currency,” Burgis explains. The dollars that pay for petroleum, minerals, ores or gems “push up the value of the local currency. Imports become cheaper relative to locally made products, undercutting homegrown enterprises. Arable land lies fallow as local farmers find that imported fare has displaced their produce” (p.70). Dutch disease stymies the possibility of industrialization within the country. As oil and minerals leave, their value accrues elsewhere and a cycle of “economic addiction” sets in: opportunity becomes “confined to the resource business, but only for the few . . . Instead of broad economies with an industrial base to provide mass employment, poverty breeds and the resource sector becomes an enclave of plenty for those who control it” (p.70). Northern Nigeria’s once thriving textile industry has now all but disappeared, creating new demand for imported clothes and fabrics. The omnipresence of Chinese goods at public markets testifies to Nigeria’s “near-total failure to develop a strong manufacturing sector of its own” (p.72).

      Today, an immense network of political patronage sustains Nigeria’s petro-kleptocracy. That network propelled a once-obscure geologist, Goodluck Jonathan, to the presidency. Jonathan became governor of his home state of Bayelsa, then vice-president under President Umaru Yar’Adua. After Yar’Adua died in office in 2010, Jonathan acceded to the presidency. When he sought the People’s Democratic Party’s nomination for president in his own right in 2011, party leaders beat back a challenger with $7,000 payments to a sufficient number of the party’s 3,400 delegates to assure Jonathan’s nomination; $7,000 represents roughly five times the average Nigerian’s annual income, Burgis points out. Jonathan served as president from 2011 to 2015 when, in a campaign in which state corruption was a major issue, he became the first Nigerian president to be voted out of office.

     On Jonathan’s watch, “jaw dropping” quantities (p.205) of Nigeria’s approximately $60 billion in annual oil revenue went unaccounted for.  Meanwhile, the visibility of the Islamic terrorist group Boko Haram increased, including its kidnapping of nearly 300 schoolgirls. For Boko Haram, the corruption of Nigeria’s ruling class and the lack of economic opportunities in the country serve as “recruiting sergeants” (p.79). Oil has “sickened Nigeria’s heart” (p.71), Burgis plaintively concludes, turning a country of immense potential into a “sorry mess” (p.75).

     Whereas the Angolan and Nigerian economies revolve around oil, a mind-boggling array of mineral resources may be found in the Democratic Republic of Congo (DRC). Its untapped deposits of raw minerals are estimated to be worth in excess of $24 trillion. The DRC has 70% of the world’s coltan, a third of its cobalt, more than 30% of its diamond reserves, and a tenth of its copper. Coltan is critical to the manufacture of a wide variety of electronics products, such as mobile phones and laptop computers. Cobalt, a by-product of copper, is used to make the ultra-strong alloys that are integral to turbines and jet engines. Such richesse, in Burgis’ view, gives Congo the dubious distinction of being the world’s richest resource country with the planet’s poorest people, “significantly worse off than other destitute Africans” (p.30). Civil wars over control of Congo’s minerals continue to this day.

     Congo’s current president, Joseph Kabila, is the son of Laurent Kabila, who was installed as president in 1997 with assistance from Rwanda’s Tutsi-led forces in a spillover from neighboring Rwanda’s ethnic wars. After a bodyguard shot his father in 2001, the younger Kabila became president in the midst of Congo’s civil wars. Burgis documents how middleman Dan Gertler, an Israeli national whose grandfather was a founder of Israel’s diamond exchange, played a key role in securing Kabila’s hold on power. In exchange for a monopoly contract to all diamonds mined in the Congo, Gertler provided Kabila with $20 million to fund his defense in Congo’s civil wars. When international pressure prompted the cancellation of his monopoly diamond contract, Gertler turned to Congo’s copper and cobalt production, helping build a “tangled corporate web through which companies linked to him have made sensational profits through sell-offs of some of Congo’s most valuable mining assets” (p.49).

     Gertler set up what Burgis terms a series of “fiendishly complicated” transactions, involving “multiple interlinked sales conducted through offshore vehicles registered in tax havens where all but the most basic company information is secret” (p.50). Most commonly, a copper or cobalt mine owned by the Congolese state or rights to a virgin deposit is sold, “sometimes in complete secrecy, to a company controlled by or linked to Gertler’s offshore network for a price far below what it is worth” (p.50). Then all or part of that asset is sold at a profit to foreign mining companies, among them some of the biggest groups on the London Stock Exchange. Even by the mining industry’s bewildering standards, Burgis contends, the structure of Gertler’s Congo deals is “labyrinthine” (p.50).

     In one case, the Congolese state sold rights to a “juicy copper prospect” (p.51) for $15 million to a private company, which immediately sold the same rights for $75 million – a $60 million loss for the state and a $60 million profit for Gertler. Former UN Secretary-General Kofi Annan’s Africa Progress Panel estimated that the Congolese state lost $1.36 billion between 2000 and 2012 from this and related deals. Yet, Burgis cautions, “[s]o porous is Congo’s treasury that there is no guarantee that, had they ended up there, these revenues would have been spent on schools and hospitals and other worthwhile endeavors; indeed, government income from resource rent has a tendency to add to misrule, absolving rulers of the need to convince electorates to pay taxes” (p.52-53). In the absence in Congo of “anything resembling a functioning state,” Burgis concludes woefully, an “ever-shifting array of armed groups continues to profit from lawlessness, burrowing for minerals and preying on a population that. . . is condemned to suffer in the midst of plenty” (p.34).

     Overshadowing Gertler as a middleman and dealmaker throughout Burgis’ case studies is the ubiquitous Chinese national Sam Pa, a mysterious man whose work is associated with the Queensway Group. Queensway, a shadowy organization based in Hong Kong, is a loose confederation of groups, most prominent among them the infrastructure-building China International Fund, or CIF. Seemingly independent of the Chinese government, CIF is closely linked to major Chinese construction firms. Across Africa, Pa, Queensway and CIF offered “pariah governments” a “ready-made technique for turning their countries’ natural resources into cash when few others are prepared to do business with them” (p.146-47).

      After Guinea’s ruling junta had ruthlessly stamped out an opposition political rally and faced “financial asphyxiation through the [international] sanctions that followed the massacre,” Pa and CIF threw a “lifeline” to the junta by funding $7 billion in mining, energy and infrastructure projects (p.119). Pa and CIF supported coups in Madagascar and Niger with multi-million dollar loans, and may have paid as much as $100 million to Robert Mugabe’s notoriously brutal security forces in Zimbabwe in exchange for diamond mining rights. Pa and CIF also maintained extensive links to Angola’s Futungo and Sonangol. As Burgis’ story ends, Pa mysteriously disappeared, apparently abducted at a Beijing hotel by communist party operatives, with the future of Queensway and CIF appearing uncertain.

* * *

     Like many exposés, Burgis’ book is longer on highlighting a problem than on providing solutions. But kleptocracy has been the subject of increasing international attention, with measures available to counter some of its manifestations. The United States has used the Foreign Corrupt Practices Act (FCPA) to prosecute some forms of kleptocracy and siphoning of natural resources. This statute makes it a crime for a company with connections to the United States to pay or offer money or anything else of value to foreign officials to win business. The Texas firm Cobalt International Energy, which contracted with Sonangol in Angola, was investigated by US authorities under the FCPA.  Money laundering prosecutions and asset forfeiture procedures also provide potential tools to mitigate some of the effects of kleptocracy.

     The United Nations and the World Bank support the “Stolen Asset Recovery Initiative” (StAR), an international network to facilitate the recovery of stolen assets and combat the laundering of the proceeds generated by Africa’s looting machines.  The United States government has also launched its own Kleptocracy Asset Recovery Initiative to support recovery of assets within the United States that are the result of illegal conduct overseas. Of course, many of the deals that Burgis describes, while siphoning a country’s resources, are nonetheless legal under that country’s laws. Further, international donors, including the World Bank and my former office at the US Department of Justice, provide anti-corruption assistance to individual countries to create or strengthen internal anti-corruption institutions and build capacity to prosecute and adjudicate corruption cases.  To be effective, such assistance requires “political will,” the support of the host country, a quality likely to be lacking in the cases Burgis treats.

* * *

     Burgis reminds readers in his conclusion that those who fuel Africa’s looting machines – warlords, oligarchs, corporations and smugglers — all “profit from the natural wealth whose curse sickens the lives of hundreds of millions of Africans” (p.244). More than a searing indictment of African leaders and their cohorts, Burgis’ work is also a heartfelt plea on behalf of average African citizens, the victims of the continent’s resource curse.

Thomas H. Peebles
La Châtaigneraie, France
December 13, 2016


Filed under Politics, World History

Formidable Thinker, Reluctant Politician, President of Two Countries


Michael Žantovsky, Havel: A Life

     In our time of rising xenophobia, ethnic nationalism and raging populism, Václav Havel, if he is remembered at all, seems anachronistic, a quaint figure from a bygone era. The first president of post-communist Czechoslovakia, Havel (1936-2011) was elected during the “Velvet Revolution” in December 1989, weeks after the fall of the Berlin Wall and just days after communist control of Czechoslovakia collapsed.  After the 1992 “Velvet Divorce” split the country into the Czech Republic and Slovakia, Havel served as Czech Republic president from 1993 to 2003.

     Although both Czechoslovakia’s last president and the Czech Republic’s first, Havel was more than just president of two countries. He was also a towering moral symbol in Eastern Europe’s remarkable transition to democracy in the 1990s after decades of communist rule.  Michael Zantovsky demonstrates in his engaging biography, Havel: A Life, how Havel was instrumental in bringing about the demise of communism in Eastern Europe, “one of the most dramatic social transitions of recent history” (p.1). As president of two countries, Havel should be credited with “finally putting to rest one of the most alluring utopias of all time” (p.1).

     Havel may fairly be paired with Nelson Mandela, the most visible and best-known engineer of late 20th century transitions to democracy. Before becoming political leaders, both Mandela and Havel were jailed on account of their dissident activities.  Like Mandela, Havel advocated non-violence “not only as a matter of moral principle but as a weapon of political struggle” (p.437). But unlike Mandela, who was a man of action par excellence – a boxer as a young man, then a civil rights lawyer – Havel was an intellectual par excellence.

    Zantovsky describes Havel as a “formidable thinker, who consistently attempted to apply the results of his thinking process . . . to his practical engagement in the realm of politics” (p.1-2).  Havel’s deep thinking on the individual in the modern state is as much a part of his legacy as his actual steering of Czechoslovakia and the Czech Republic through the post-communist years.  Havel was already known as a playwright when he became a dissident challenging the communist regime in Czechoslovakia. If he had never entered politics, we would likely remember him as one of the 20th century’s most noteworthy playwrights, on par with the likes of Eugène Ionesco, Samuel Beckett and Bertolt Brecht.

     Zantovsky’s book divides into two roughly equal parts: Havel the playwright and dissident in the first half; Havel the politician in the second. Zantovsky himself is an important figure in his story. A clinical psychologist by training, a correspondent for Reuters, and once an aspiring rock music lyricist, Zantovsky served as a primary advisor and press secretary to Havel during the early transition years. Later, he received appointments as Czech Ambassador to both the United States and the United Kingdom. Zantovsky admits to having shared with Havel “many laughs, moments of sadness, quite a few drinks and some incredible moments together, both before and after he became president” (p.5).  Zantovsky’s insider’s view, seen only rarely in biography, does not preclude him from presenting a balanced portrait of his one-time boss that includes Havel’s shortcomings and failures. Zantovsky also indicates that this is his first book in English. His crisp, straightforward style, coupled with wry observations and humorous digressions, reveals a high comfort level not only with his subject but also with the English language.

* * *

     Havel’s early years coincided with the critical events that marked the life of his country and indelibly shaped his adult perspective. In 1938, when Havel was two, France and Great Britain abandoned the defense of Czechoslovakia to Nazi Germany in the infamous Munich accords, the “prime trauma of modern Czechoslovak history” (p.336). The Nazis invaded the country in March 1939. After the Nazis’ defeat in World War II, Stalinists seized control of the Czechoslovak government in 1948 in a non-violent putsch and instituted a communist regime that lasted four decades, until the Velvet Revolution of 1989 (these events are set forth in Prague Winter, the memoir of Madeleine Albright, a fellow Czech native and prominent contemporary and friend of Havel, reviewed here in May 2013).

      Havel grew up in comfortable circumstances, but his moderately wealthy bourgeois background was not an asset once the communists came to power in Czechoslovakia. Havel’s privileged upbringing left him feeling “’alone, inferior, lost, ridiculed’ and humbled” (p.21). This feeling of being outcast, isolated and unfairly privileged, Zantovsky writes, “remained with Havel throughout his life. In his own thinking it endowed him with a lifelong perspective from ‘below’ or from the ‘outside.’” (p.21). From his teens onward, Havel was a “leader, setting agendas, walking at the front, showing the way. . . [but] with a diffidence, kindness and politeness so unwavering (and often unwarranted) that Havel himself caricatured it in some of his plays” (p.3). At age 19, when he fell in love with his life-long partner and wife Olga Šplichalová, Havel already had the “gravitas of really believing what he was saying” (p.53).

     Havel’s bourgeois background precluded him from being accepted in an arts and science faculty at a Czech university. He was able to gain admission to a program in the economics of transportation, an arcane field that did not interest him, and he dropped out to join the Czech military. After completing military service, Havel became a playwright and an established member of the “shadow, non-conformist, bohemian underworld” (p.41) of the Prague intellectual class. Whatever he did in the future, Zantovsky indicates, Havel’s loyalties always remained with this shadowy underworld.

     Havel’s plays explored how inauthenticity, alienation, the absurd, social isolation and depersonalization affected individuals in Czechoslovakia and totalitarian societies generally.   In one of his best known plays, “The Memorandum,” Havel posed the question of “passive participation in evil” (p.93), a question that he would return to repeatedly. Havel sought to demonstrate how totalitarian control drives individuals to “isolation, and makes them fear, suspect and avoid others” (p.95). But the human capacity to “’live the truth,’ to reaffirm man’s ‘authentic identity’” constitutes in all of Havel’s plays what Zantovsky terms the “nuclear weapon” that “gives power to the powerless. As soon as the system is no longer able to extract the ritual endorsement from its subjects, its ideological pretensions collapse as the lies they are” (p.200).

     In the 1960s, Havel gradually became associated with dissident opposition to the Communist regime. With his principal themes of identity, responsibility and the elusive notion of “living in truth” by this time fully formed, Havel came to the hazardous conclusion that rather than “waste time by hopelessly tinkering with the [communist] system in the effort of making it livable and sustainable, it was necessary to replace it as a whole” (p.96). Havel became a full-fledged leader of the dissident movement at the time of the “Czech spring” of 1968, when the Czechoslovak government sought to institute modest reforms under the banner of “socialism with a human face.” In August of that year, the Soviet Union brutally suppressed the fledgling reform movement in one of the “most massive overnight military invasions in European history” (p.115). Twenty oppressive years of what the communists termed “normalization” followed.

     The early years of so-called “normalization” following suppression of the Czech Spring were for Havel a period of “shapeless fog” (p.132). But by the mid-1970s, Havel had become the driving force behind the Czechoslovak dissident movement.  His essay, “The Power of the Powerless,” dissected the nature of the communist regime and argued that sustained opposition on the part of ordinary citizens could eventually topple it. Havel became one of the principal authors of Charter 77, drafted in late 1976 in response to the imprisonment of members of a Czech psychedelic rock band.  Charter 77 became the defining document of the Czech dissident movement and helped raise awareness in Western countries of human rights behind the Iron Curtain. The charter criticized the Czechoslovak government for failing to implement human rights provisions in its constitution and in a host of international instruments that it had signed.

     During his dissident years, Havel landed in prison on multiple occasions, the longest being nearly four years, between 1979 and 1983. While imprisoned, Havel wrote an extensive series of letters to his wife Olga, later published as “Letters to Olga” — “hybrids of creative writing, philosophy and political prose” (p.3). Although in jail when Czech dissident activities surged in the early 1980s, Havel was nonetheless directly or indirectly linked to these activities, as an “instigator, an inspiration, a spectator or as a friend. It almost appears as if he were a spider at the center of a web, spinning and waiting” (p.275). Around this time, Havel “must have realized himself that he was on a transitional trajectory from being an artist and dissident to becoming a politician” (p.275). His prison experience had made him “uniquely well prepared for the single-minded focus towards the tasks ahead, culminating with his leadership of the Velvet Revolution” (p.231).

     A dizzying six weeks after the Berlin Wall fell on November 9, 1989, Havel, leading a disparate group termed Civic Forum, became his country’s first freely elected president since the legendary post-World War I leader Tomáš Masaryk.  Havel “probably never dreamt about being president, nor did he particularly wish to assume the office. Throughout his life he thought of himself primarily as a writer; what people thought about his writing affected him much more personally than what they thought about him as a politician” (p.317). But in what Zantovsky terms the “reality play of his life,” Havel had “set the stage in such a way that, when the final act arrived, the logic of the piece inexorably led him to assume the leading position” (p.317) as the newly independent state set upon an uncertain transformation away from totalitarian rule and toward democracy.

* * *

     Both as a protester and as a politician, Havel advocated what Zantovsky terms “socialist humanism,” an idealized version of the social welfare states of Western Europe. Despite his voracious reading and self-education, when Havel became president he was “ignorant of the fundamentals of economic theory” and “totally unfamiliar with the practical workings of a real economy” (p.392-93). Only “grudgingly” did Havel come to “acknowledge, and even to respect the role of political organizations as agents of change and condensers of political energy” (p.204). In an interview after he left office, Havel said that his most serious mistake as president was that he had “not more energetically promoted his vision of a humanistic and moral society during his time as president.” To many people, especially his detractors, Zantovsky wryly notes, “he had done little else” (p.459).

     Havel seemed embarrassed by the power that his political position yielded, “always wary of trying to elevate himself or of exaggerating his own importance” (p.405). In leading the transition away from communism and toward democracy, one of Havel’s strengths, but arguably also a weakness, was that he rejected the “concept of the Enemy.” He consistently went out of his way to “understand rather than to demonize the motives of the other side and, if at all possible, always to extend to them the benefit of the doubt” (p.108-09). Havel’s conciliatory approach “led to accusations that he was soft on the exponents of the previous regime, or even that there was possible some secret collusion between them” (p.109).

     The most significant issue Havel had to deal with as President of Czechoslovakia was the Velvet Divorce, in which a Slovak independence movement split the country into a new Czech Republic and its southern and eastern neighbor, Slovakia. Havel could not endorse separation, which “ran against the grain of his conviction, his philosophy, his understanding of democracy and his sense of responsibility” (p.419). But neither could Havel take a “heroic stand” against separation, “in view of the risks and uncertainties this would pose for 15 million of his fellow citizens” (p.419). It was better to have two functioning countries than a single, dysfunctional one, Havel reasoned. Havel resigned as president of Czechoslovakia after Slovakia’s official July 1992 declaration of independence.  He had no involvement in the working out of the details of the separation over the following six months. But he was persuaded to run for the presidency of the new Czech Republic and became its first president in January 1993.

     As Czech President, Havel had a complicated relationship with Václav Klaus, his prime minister, who went on to succeed Havel as Czech President in 2003.  Klaus was in many ways the opposite of Havel. A free-market economist, Klaus battled with Havel over the “character of Czech society and over the values and principles it should abide by. For Klaus, these values could be reduced to individual economic and political freedom and a vague allegiance to the national community as the conduit of history, culture and traditions” (p.456). Klaus was a Eurosceptic, whereas Havel “emphasized time and time again the great opportunity that the process of European integration offered for ‘civilizational self-reflection,’ and promoted the idea of ‘Europe as a mission’” (p.449). Havel’s relationship with his Polish counterpart Lech Wałęsa, another hero in Eastern Europe’s transition to democracy, was less complicated, in no small measure because Wałęsa shared Havel’s dedication to European integration for former Warsaw Pact countries.

     Wałęsa embodied the “heroic past of the Polish nation, with its brave if sometimes futile resistance to foreign oppressors,” whereas Havel “exemplified the fundamental unity of Central Europe with the rest of the West in terms of culture, philosophy and political thinking” (p.444). But despite differences in the two men’s character and outlook, they were a forceful single voice for expansion of NATO to Eastern Europe and accession of former Iron Curtain countries into the European Union, which both US President Bill Clinton and major Western European leaders initially opposed. Havel and Wałęsa “complemented each other as well as any pair since Laurel and Hardy. It is hard to imagine that the enlargement would have occurred without either of them,” Zantovsky contends. “If most of Europe today is safer than at any time in its history, it is not least thanks to the vision of statesmen like Bill Clinton, Lech Wałęsa. . . and Václav Havel” (p.444-45).

     When Havel left the Czech presidency in 2003, he was a widely known and respected figure, and he traveled extensively throughout the world.  His wife Olga had died in 1996 and Havel married an actress (Havel had more than his fair share of extra-marital affairs while married to Olga, which Zantovsky mentions but does not dwell upon). Havel became a Visiting Fellow at the Library of Congress in Washington, where he wrote a memoir, “To the Castle and Back,” which Zantovsky describes as an “existential meditation on the meaning of life, politics and love, for which the presidency is not much more than a backdrop” (p.504). He also wrote a play, “Leaving,” that seemed to foreshadow his own death. After several years of declining health, brought about in part by a lifetime of heavy cigarette smoking, Havel died at his country home in December 2011, age 75.

* * *

     Zantovsky summarizes the “remarkable balance sheet” of his former boss’ presidency by noting that Havel should be given credit for the “peaceful transformation of the country from totalitarian rule to democracy; [and] for building a stable system of democratic and political institutions, comparable in most respects, flaws included, to long-existing systems in the West” (p.497). Further, Havel “successfully brought the country back to Europe and made it an integral part of Western political and security alliances; and he remained an inspiration and identifiable supporter in the struggle for human rights and freedoms around the world” (p.497). Even the Velvet Divorce, Havel’s greatest setback as a political leader, was mitigated by its “peaceful and consensual character” (p.497).

    Yet Zantovsky also notes in his affectionate portrait that Havel “conspicuously failed at making the society at large adhere to his ideals of morality, tolerance, and civic spirit, but that said more about society than about him. Arguably, he had never expected to succeed fully” (p.497). Today, the ideals of this enigmatic, brilliant man and reluctant politician seem far more elusive than in Havel’s time.

Thomas H. Peebles
La Châtaigneraie, France
November 19, 2016


Filed under Biography, Eastern Europe, European History, History

Can’t Forget the Motor City


David Maraniss, Once In a Great City: A Detroit Story

     In 1960, Detroit was the automobile capital of the world, America’s undisputed center of manufacturing, and its fifth most populous city, with that year’s census tallying 1.67 million people. Fifty years later, the city had lost nearly a million people; its population had dropped to 677,000 and it ranked 21st in population among America’s cities in the 2010 census. Then, in 2013, the city reinforced its image as an urban basket case by ignominiously filing for bankruptcy. In Once In a Great City: A Detroit Story, David Maraniss, a native Detroiter of my generation and a highly skilled journalist whose previous works include books on Barack Obama, Bill Clinton and Vince Lombardi, focuses upon Detroit before its precipitous fall, an 18-month period from late 1962 to early 1964.   This was the city’s golden moment, Maraniss writes, when Detroit “seemed to be glowing with promise. . . a time of uncommon possibility and freedom when Detroit created wondrous and lasting things” (p.xii-xiii; in March 2012, I reviewed here two books on post World War II Detroit, under the title “Tales of Two Cities”).

      Detroit produced more cars in this 18-month period than Americans produced babies.  Berry Gordy Jr.’s popular music empire, known officially and affectionately as “Motown,” was selling a new, upbeat pop music sound across the nation and around the world.  Further, at a time when civil rights for African-Americans had become America’s most morally compelling issue, race relations in a city then about one-third black appeared to be as good as anywhere in the United States. With a slew of high-minded officials in the public and private sector dedicated to racial harmony and justice, Detroit sought to present itself as a model for the nation in securing opportunity for all its citizens.

     Maraniss begins his 18-month chronicle with dual events on the same day in November 1962: the burning of an iconic Detroit area memorial to the automobile industry, the Ford Rotunda, a “quintessentially American harmonic convergence of religiosity and consumerism” (p.1-2); and, later that afternoon, a police raid on the Gotham Hotel, once the “cultural and social epicenter of black Detroit” (p.10), but by then considered to be a den of illicit gambling controlled by organized crime groups.  He ends with President Lyndon Johnson’s landmark address in May 1964 on the campus of the nearby University of Michigan in Ann Arbor, where Johnson outlined his grandiose vision of the Great Society.  Johnson chose Ann Arbor as the venue to deliver this address in large measure because of its proximity to Detroit. No place seemed “more important to his mission than Detroit,” Maraniss writes, a “great city that honored labor, built cars, made music, promoted civil rights, and helped lift working people into the middle class” (p.360).

     Maraniss’ chronicle unfolds between these bookend events, revolving around what had attracted President Johnson to the Detroit area in May 1964: building cars, making music, promoting civil rights, and lifting working people into the middle class. He skillfully weaves these strands into an affectionate, deeply researched yet easy-to-read portrait of Detroit during this 18-month golden period.  But Maraniss does not ignore the fissures, visible to those perceptive enough to recognize them, which would lead to Detroit’s later unraveling.  Detroit may have found the right formula for bringing a middle-class lifestyle to working-class Americans, black and white alike. But already Detroit was losing population as its white working class was taking advantage of newfound prosperity to leave the city for nearby suburbs.  Moreover, many in Detroit’s black community found the city to be anything but a model of racial harmony.

* * *

     An advertising executive described Detroit in 1963 as “intensely an automobile community – everybody lives, breathes, and sleeps automobiles. It’s like a feudal city” (p.111). Maraniss’ inside account of Detroit’s automobile industry focuses principally upon the remarkable relationship between Ford Motor Company’s chief executive, Henry Ford II (sometimes referred to as “HF2” or “the Deuce”) and the head of the United Auto Workers, Walter Reuther, during this 18-month golden age (Maraniss accords far less attention to the other two members of Detroit’s “Big Three,” General Motors and Chrysler, or to the upstart American Motors Corporation, whose chief executive, George Romney, was elected governor in November 1962 as a Republican). Ford and Reuther could not have been more different.

     Ford, from Detroit’s most famous industrial family, was a graduate of Hotchkiss School and Yale University who had been called home from military service during World War II to run the family business when his father Edsel Ford, then company president, died in 1943. Maraniss mischievously describes the Deuce as having a “touch of the peasant, with his manicured nails and beer gut and . . . frat-boy party demeanor” (p.28). Yet, Ford earnestly sought to modernize a company that he thought had grown too stodgy.  And, early in his tenure, he had famously said, “Labor unions are here to stay” (p.212).

      Reuther was a graduate of the “school of hard knocks,” the son of German immigrants whose father had worked in the West Virginia coalmines.   Reuther himself had worked his way up the automobile assembly line hierarchy to head its powerful union. George Romney once called Reuther the “most dangerous man in Detroit” (p.136). But Reuther prided himself on “pragmatic progressivism over purity, getting things done over making noise. . . [He was] not Marxist but Rooseveltian – in his case meaning as much Eleanor as Franklin” (p.136). Reuther believed that big government was necessary to solve big problems. During the Cold War, he won the support of Democratic presidents by “steering international trade unionists away from communism” (p.138).

     A quarter of a century after the infamous confrontation between Reuther and goons recruited by the Deuce’s grandfather Henry Ford to oppose unionization in the automobile industry — an altercation in which Reuther was seriously injured — the younger Ford’s partnership with Reuther blossomed. Rather than bitter and violent confrontation, the odd couple worked together to lift huge swaths of Detroit’s blue-collar auto workers into the middle class – arguably Detroit’s most significant contribution to American society in the second half of the 20th century. “When considering all that Detroit has meant to America,” Maraniss writes, “it can be said in a profound sense that Detroit gave blue-collar workers a way into the middle class . . . Henry Ford II and Walter Reuther, two giants of the mid-twentieth century, were essential to that result” (p.212).

      Reuther was aware that, despite higher wages and improved benefits, life on the assembly lines remained “tedious and soul sapping if not dehumanizing and dangerous” for autoworkers (p.215). He therefore consistently supported improving leisure time for workers outside the factory.  Music was one longstanding outlet for Detroiters, including its autoworkers. The city’s rich history of gospel, jazz and rhythm and blues musicians gave Detroit an “unmatched creative melody” (p.100), Maraniss observes.   By the early 1960s, Detroit’s musical tradition had become identified with the work of Motown founder, mastermind and chief executive, Berry Gordy Jr.

     Gordy was an ambitious man of “inimitable skills and imagination . . . in assessing talent and figuring out how to make it shine” (p.100).  Gordy aimed to market his Motown sound to white and black listeners alike, transcending the racial confines of the traditional rhythm and blues market. He set up what Maraniss terms a “musical assembly line” that “nurtured freedom through discipline” (p.195) for his many talented performers. The songs which Gordy wrote and championed captured the spirit of working class life: “clear story lines, basic and universal music for all people, focusing on love and heartbreak, work and play, joy and pain” (p.53).

     Gordy’s team included a mind-boggling array of established stars: Mary Wells, Marvin Gaye, Smokey Robinson and his Miracles, Martha Reeves and her Vandellas, Diana Ross and her Supremes, and the twelve-year-old prodigy, Little Stevie Wonder.  Among Gordy’s rising future stars were the Temptations and the Four Tops. The Motown team was never more talented than in the summer of 1963, Maraniss contends. Ten Motown singles rose to Billboard’s Top 10 that year, and eight more to the Top 20.  Wonder, who dropped “Little” before his name in 1963, saw his “Fingertips Part 2” rocket up the charts to No. 1.  Martha and the Vandellas made their mark with “Heat Wave,” a song with “irrepressibly joyous momentum” (p.197).  But the title could have referred equally to the rising intensity of the nationwide quest for racial justice and civil rights for African-Americans that summer.

      Maraniss reminds us that in June 1963, nine weeks before the March on Washington, Dr. Martin Luther King, Jr. delivered the outlines of his famous “I Have a Dream” speech at the end of a huge Detroit “Walk to Freedom” rally that took place almost exactly 20 years after a devastating racial confrontation between blacks and whites in wartime Detroit. The Walk drew an estimated 100,000 marchers, including a significant if limited number of whites. What King said that June 1963 afternoon, Maraniss writes, was “virtually lost to history, overwhelmed by what was to come, but the first time King dreamed his dream at a large public gathering, he dreamed it in Detroit” (p.182). Concerns about disorderly conduct and violence preceded both the Detroit Walk to Freedom and the March on Washington two months later. Yet the two events were for all practical purposes free of violence.  Just as the March on Washington energized King’s non-violent quest for civil rights nationwide, the Walk to Freedom buoyed Detroit’s claim to be a model of racial justice in the urban north.

      In the Walk to Freedom and in the nationwide quest for racial justice, Walter Reuther was an unsung hero. Under Reuther’s leadership, the UAW made an “unequivocal moral and financial commitment to civil rights action and legislation” (p.126).  Once John Kennedy assumed the presidency, Reuther consistently pressed the administration to move on civil rights.  The White House in turn relied on Reuther to serve as a liaison to black civil rights leaders, especially to Dr. King and his southern desegregation campaign. The UAW functioned as what Maraniss terms the “bank” (p.140) of the Civil Rights movement, providing needed funding at critical junctures. To be sure, Maraniss emphasizes, not all rank-and-file UAW members shared Reuther’s passionate commitment to the Walk to Freedom, the March on Washington, or the cause of civil rights for African-Americans.

     Even within Detroit’s black community, not all leaders supported the Walk to Freedom. Maraniss provides a close look at the struggle between the Reverend C.L. Franklin and the Reverend Albert Cleage for control over the details of the Walk to Freedom and, more generally, for control over the direction of the quest for racial justice in Detroit. Reverend Franklin, Detroit’s “flashiest and most entertaining preacher” (p.12; also the father of singer Aretha, who somehow escaped Gordy’s clutches to perform for Columbia Records and later Atlantic), was King’s closest ally in Detroit’s black community. Cleage, whose church later became known as the Shrine of the Black Madonna, founded on the belief that Jesus was black, was not wedded to Dr. King’s brand of non-violence. Cleage sought to limit the influence of Reuther, the UAW and whites generally in the Walk to Freedom. Franklin was able to retain the upper hand in setting the terms and conditions for the June 1963 rally.  But the dispute between Reverends Franklin and Cleage reflected the more fundamental difference between black nationalism and Martin Luther King-style integration, and was thus an “early formulation of a dispute that would persist throughout the decade” (p.232).

     In November of 1963, Cleage sponsored a conference that featured black nationalist Malcolm X’s “Message to the Grass Roots,” an important if less well-known counterpoint to King’s “I Have A Dream” speech in Washington in August of that year.  In tone and substance, Malcolm’s address “marked a break from the past and laid out a path for the black power movement to follow from then on” (p.279). Malcolm referred in his speech to the highly publicized police killing of prostitute Cynthia Scott the previous summer, which had generated outrage throughout Detroit’s black community and exacerbated long-simmering tensions between the community and a police force that was more than 95% white.

     Scott’s killing “discombobulated the dynamics of race in the city. Any communal black and white sensibility resulting from the June 23 [Walk to Freedom] rally had dissipated, and the prevailing feeling was again us versus them” (p.229).  The tension between police and community did not abate when Police Commissioner George Edwards, a long-standing liberal who enjoyed strong support within the black community, considered the Scott case carefully and ruled that the shooting was “regrettable and unwise . . . but by the standards of the law it was justified” (p.199).

      Then there was the contentious issue of a proposed Open Housing ordinance that would have prohibited property owners from refusing to sell their property on the basis of race. The proposed ordinance required passage by the city’s nine-person City Council, elected at large in a city that was one-third black – no one on the council directly represented the city’s black neighborhoods. The proposal was similar in intent to future national legislation, the Fair Housing Act of 1968, and had the enthusiastic support of Detroit’s progressive mayor, Jerome Cavanagh, a youthful Irish Catholic who deliberately cast himself as a midwestern John Kennedy.

      But the proposal evoked bitter opposition from white homeowner associations across the city, revealing the racial fissures within Detroit. “On one side were white homeowner groups who said they were fighting on behalf of individual rights and the sanctity and safety of their neighborhoods. On the other side were African American churches and social groups, white and black religious leaders, and the Detroit Commission on Community Relations, which had been established . . . to try to bridge the racial divide in the city” (p.242).   Notwithstanding the support of the Mayor and leaders like Reuther and Reverend Franklin, white homeowner opposition doomed the proposed ordinance. The City Council rejected the proposal 7-2, a stinging rebuke to the city’s self-image as a model of racial progress and harmony.

       Detroit’s failed bid for the 1968 Olympics was an equally stinging rebuke to the self-image of a city that loved sports as much as music. Detroit bested more glamorous Los Angeles for the right to represent the United States in international competition for the games. A delegation of city leaders, including Governor Romney and Mayor Cavanagh, traveled to Baden-Baden, Germany, where they made a well-received presentation to the International Olympic Committee. While Detroit was making its presentation, the Committee received a letter from an African American resident of Detroit who alluded to the Scott case and the failed Open Housing Ordinance to argue against awarding the games to the city on the ground that fair play “has not become a living part of Detroit” (p.262). Although bookmakers had made Detroit a 2-1 favorite for the 1968 games, the Committee awarded them to Mexico City. Its selection, in Maraniss’ view, was driven largely by Cold War considerations, with Soviet bloc countries voting against Detroit. The delegation dismissed the view that the letter to the Committee might have undermined Detroit’s bid, but its actual effect on the Committee’s decision remains undetermined.

         Maraniss asks whether Detroit might have been able to better contain or even ward off the devastating 1967 riots had it been awarded the 1968 Olympic games. “Unanswerable, but worth pondering” is his response (p.271). In explaining the demise of Detroit, many, myself included, start with the 1967 riots which in a few short but violent days destroyed large swaths of the city, obliterating once solid neighborhoods and accelerating white flight to the suburbs.  But Maraniss emphasizes that white flight was already well underway long before the 1967 disorders. The city’s population had dropped from just under 1.9 million in the 1950 census to 1.67 million in 1960. In January of 1963, Wayne State University demographers published “The Population Revolution in Detroit,” a study which foresaw an even more precipitous emigration of Detroit’s working class in the decades ahead. The Wayne State demographers “predicted a dire future long before it became popular to attribute Detroit’s fall to a grab bag of Rust Belt infirmities, from high labor costs to harsh weather, and before the city staggered from more blows of municipal corruption and incompetence. Before any of that, the forces of deterioration were already set in motion” (p.91). Only a minor story in January 1963, the findings and projections of the Wayne State study in retrospect were of “startling importance and haunting prescience” (p.89).

* * *

      My high school classmates are likely to find Maraniss’ book a nostalgic trip down memory lane: his 18-month period begins with our senior year in a suburban Detroit high school and ends with our freshman college year — our own time of soaring youthful dreams, however unrealistic. But for those readers lacking a direct connection to the book’s time and place, and particularly for those who may still think of Detroit only as an urban basket case, Maraniss provides a useful reminder that it was not always thus.  He nails the point in a powerful sentence: “The automobile, music, labor, civil rights, the middle class – so much of what defines our society and culture can be traced to Detroit, either made there or tested there or strengthened there” (p.xii).  To this, he could have added, borrowing from Martha and the Vandellas’ 1964 hit, “Dancing in the Street,” that America can’t afford to forget the Motor City.

 

                   Thomas H. Peebles

Berlin, Germany

October 28, 2016


Filed under American Politics, American Society, United States History

Becoming FLOTUS


Peter Slevin, Michelle Obama: A Life 

             In Michelle Obama: A Life, Peter Slevin, a former Washington Post correspondent presently teaching at Northwestern University, explores the improbable story of Michelle LaVaughn Robinson, now Michelle Obama, the First Lady of the United States (a position known affectionately in government memos as “FLOTUS”). Slevin’s sympathetic yet probing biography shows how Michelle’s life was and still is shaped by the blue collar, working class environment of Chicago’s South Side, where she was born and raised. Michelle’s life in many ways is a microcosm of 20th century African-American experience. Michelle’s ancestors were slaves, and her grandparents were part of the “Great Migration” of the first half of the 20th century that sent millions of African-Americans from the rigidly segregated south to northern urban centers in search of a better life.  Michelle was born in 1964, during the high point of the American civil rights movement, and is thus part of the generation that grew up after that movement had widened the opportunities available to African Americans.

            The first half of the book treats Michelle’s early life as a girl growing up on the South Side of Chicago and her experiences as an African-American at two of America’s ultra-elite institutions, Princeton University and Harvard Law School.  The centerpiece of this half is the loving environment that Michelle’s parents, Fraser Robinson III and his wife Marian Shields Robinson, created for Michelle and her older brother Craig, born two years earlier in 1962.  The Robinson family emphasized the primacy of education as the key to a better future, along with hard work and discipline, dedication to family, regular church attendance, and community service.

            Michelle’s post-Harvard professional and personal lives form the book’s second half. Early in her professional career, Michelle met a young man from Hawaii with an exotic background and equally exotic name, Barack Hussein Obama. Slevin provides an endearing account of their courtship and marriage (their initial date is also the subject of a recent movie “Southside With You”). Once Barack enters the scene, however, the story becomes as much about his entry and dizzying rise in politics as it is about Michelle, and thus likely to be familiar to many readers.

            But in this half of the book, we also learn about Michelle’s career in Chicago; how she balanced her professional obligations with her parental responsibilities; her misgivings about the political course Barack seemed intent upon pursuing; her at first reluctant, then full-throated support for Barack’s long-shot bid for the presidency; and how she elected to utilize the platform which the White House provided to her as FLOTUS.  Throughout, we see how Michelle retained the values of her South Side upbringing.

* * *

        Slevin provides an incisive description of 20th century Chicago, beginning in the 1920s, when Michelle’s grandparents migrated from the rural south.  He emphasizes the barriers that African Americans experienced, limiting where they could live and work, their educational opportunities, and more. Michelle’s father Fraser, after serving in the U.S. Army, worked in a Chicago water filtration plant up to his death in 1991 from multiple sclerosis at age 55. Marian, still living (“the First Grandmother”), was mainly a “stay-at-home Mom.”  In a city that “recognized them first and foremost as black,” Fraser and Marian refused to utilize the oppressive shackles of racism as an excuse for themselves or their children.  The Robinson parents “saw it as their mission to provide strength, wisdom, and a measure of insulation to Michelle and Craig” (p.26). Their message to their children was that no matter what obstacles they faced because of their race or their working class roots, “life’s possibilities were unbounded. Fulfillment of those possibilities was up to them. No excuses” (p.47).

     The South Side neighborhood where Michelle and Craig were raised, although part of Chicago’s rigidly segregated housing patterns, offered a stable and secure environment, with well-kept if modest homes and strong neighborhood schools. The neighborhood and the Robinson household provided Michelle and Craig with what Craig later termed the “Shangri-La of upbringings” (p.33).  Fraser and Marian both regretted deeply that they were not college graduates. The couple consequently placed an unusually high premium on education for their children, adopting a savvy approach which parents today would be wise to emulate.

       For the two Robinson children, learning to read and write was a means toward the even more important goal of learning to think. Fraser and Marian advised their children to “use their heads, yet not to be afraid to make mistakes – in each case learning from what goes wrong” (p.46).  We told them, Marian recounted, “Make sure you respect your teachers, but don’t hesitate to question them. Don’t allow even us to say just anything to you” (p.47). Fraser and Marian granted their children freedom to explore, test ideas and make their own decisions, but always within a framework that emphasized “hard work, honesty, and self-discipline. There were obligations and occasional punishment. But the goal was free thinking” (p.46).

       Both Robinson children were good students, but with diametrically opposite study methods. Michelle was methodical and obsessive, putting in long hours, while Craig largely coasted to good grades. Michelle went to Princeton in part because Craig was already a student there, but she did so with misgivings and concerns that she might not be up to its high standards. Prior to Princeton, Craig and Michelle had had little exposure to whites. If they experienced animosity in their early years, Slevin writes, it was “likely from African American kids who heard their good grammar, saw their classroom diligence, and accused them of ‘trying to sound white’” (p.49). At Princeton, however, a school which “telegraphed privilege” (p.71), Michelle began a serious contemplation of what it meant to be an African-American in a society where whites held most of the levers of power.

       As an undergraduate between 1982 and 1986, Michelle came to see a separate black culture existing apart from white culture. Black culture had its own music, language, and history which, as she wrote in a college term paper, should be attributed to the “injustices and oppressions suffered by this race of people which are not comparable to the experience of any other race of people through this country’s history” (p.91). Michelle observed that black public officials must persuade the white community that they are “above issues of race and that they are representing all people and not just Black people” (p.91-92). Slevin notes that Michelle’s description “strikingly foreshadowed a challenge that she and her husband would face twenty two years later as they aimed for the White House” (p.91). Michelle’s college experience was a vindication of the framework Fraser and Marian had created that allowed Michelle to flourish. At Princeton, Michelle learned that the girl from blue collar Chicago could “play in the big leagues” (p.94), as Slevin puts it.

            In the fall of 1986, Michelle entered Harvard Law School, another “lofty perch, every bit as privileged as Princeton, but certainly more competitive once classes began” (p.95). In law school, she was active in an effort to bring more African American professors to a faculty that was made up almost exclusively of white males. She worked for the Legal Aid Society, providing services to low income individuals. When she graduated from law school in 1989, she returned to Chicago – it doesn’t seem that she ever considered other locations. But, notwithstanding her activist leanings as a student, she chose to work as an associate in one of Chicago’s most prestigious corporate law firms, Sidley and Austin.

       Although located only a few miles from the South Side neighborhood where Michelle had grown up, Sidley and Austin was a world apart, another bastion of privilege, with some of America’s best known and most powerful businesses as its clients. The firm offered Michelle the opportunity to sharpen her legal skills, particularly in intellectual property protection, and, at least as importantly, to pay off some of her student loans. But, like many idealistic young law graduates, she did not find work in a corporate law firm satisfying and left after two years.

        Michelle landed a job with the City of Chicago as an assistant to Valerie Jarrett, then the city’s Commissioner for Planning and Economic Development, who later became a valued White House advisor to President Obama. Michelle’s position was more operational than legal, serving as a “trouble shooter” with a discretionary budget that could be utilized to advance city programs at the neighborhood level on subjects as varied as business development, infant mortality, mobile immunization, and after school programs. But working for the City of Chicago was nothing if not political, and Michelle left after 18 months to take a position in 1993 at the University of Chicago, located on Chicago’s South Side, not far from where she grew up.

    Although still another of America’s most prestigious educational institutions, the University of Chicago had always seemed like hostile territory to Michelle, incongruous with its surrounding low- and middle-income neighborhoods. But Michelle landed a position with a university program, Public Allies, designed to improve the University’s relationship with the surrounding communities. Notwithstanding her lack of warm feelings for the university, the position was an excellent fit.  It afforded Michelle the opportunity to try her hand at bridging some of the gaps between the university and its less privileged neighbors.

          After nine years with Public Allies, Michelle took a position in 2002 with the University of Chicago Hospital, again involved in public outreach, focused on the way the hospital could better serve the medical needs of the surrounding community. This position, Slevin notes, brought home to Michelle the massive inequalities within the American health care system, divided between the haves with affordable insurance and the have-nots without it.  Michelle stayed in this position until early 2008, when she left to work on her husband’s long-shot bid for the presidency. In her positions with the city and the university, Michelle developed a demanding leadership style for her staffs that she brought to the White House: result-oriented, given to micro-management, and sometimes “blistering” (p.330) to staff members whose performance fell short in her eyes.

* * *

       While working at Sidley and Austin, Michelle interviewed the young man from Hawaii, then in his first year at Harvard Law School, for a summer associate position. Michelle in Slevin’s account found the young man “very charming” and “handsome,” and sensed that, as she stated subsequently, he “liked my dry sense of humor and my sarcasm” (p.121). But if there was mutual attraction, it was the attraction of opposites. Barack Obama was still trying to figure out where his roots lay. Michelle Robinson, quite obviously, never had to address that question. Slevin notes that the contrast could “hardly have been greater” between Barack’s “untethered life and the world of the Robinson and Shields clans, so numerous and so firmly anchored in Chicago. He felt embraced and it surprised him” (p.128; Barack’s untethered life figures prominently in Janny Scott’s biography of Barack’s mother, Ann Dunham, reviewed here in July 2012).  For Barack, meeting the Robinson family for the first time was, as he later wrote, like “dropping in on the set of Leave It to Beaver” (p.127).  The couple married in 1992.

        Barack served in the Illinois Senate from 1997 to 2004. In 2000, he ran unsuccessfully for the United States House of Representatives, losing in a landslide. He had his breakthrough moment in 2004, when John Kerry, the Democratic presidential candidate, invited him to deliver a now famous keynote address to that year’s Democratic National Convention.  Later that year, he won an open United States Senate seat by a landslide after his Republican opponent dropped out due to a sex scandal.  In early 2007, he decided to run for the presidency.

       Michelle’s mistrust of politics was “deeply rooted and would linger long into Barack’s political career” (p.161), Slevin notes.  Her distrust was at the root of discernible frictions within their marriage, especially after their daughters were born — Malia in 1998 and Sasha in 2001. Barack’s political campaigning and professional obligations kept him away from home much of the time, to Michelle’s dismay. Michelle felt that she had accomplished more professionally than Barack, and was also saddled with parental duties in his absence. “It sometimes bothered her that Barack’s career always took priority over hers. Like many professional women of her age and station, Michelle was struggling with balance and a partner who was less involved – and less evolved – than she had expected” (p.180-81).

        Michelle was, to put it mildly, skeptical when her husband told her in 2006 that he was considering running for the presidency. She worried about further losing her own identity, giving up her career for four years, maybe eight, and living with the real possibility that her husband could be assassinated. Yet, once it became apparent that Barack was serious about such a run and had reached the “no turning back” point, Michelle was all in.  She became a passionate, fully committed member of Barack’s election team, a strategic partner who was “not shy about speaking up when she believed the Obama campaign was falling short” (p.219).

         With Barack’s victory over Senator John McCain in the 2008 presidential election, Michelle became what Slevin terms the “unlikeliest first lady in modern history” (p.4). The projects and messages she chose to advance as FLOTUS “reflected a hard-won determination to help the working class and the disadvantaged, to unstack the deck. She was more urban and more mindful of inequality than any first lady since Eleanor Roosevelt” (p.5). Michelle reached out to children in the less favored communities in Washington, mostly African American, and thereafter to poor children around the world. She also concentrated on issues of obesity, physical fitness and nutrition, famously launching a White House organic vegetable garden. She developed programs to support the wives of American military personnel deployed in Iraq and Afghanistan, women struggling to “keep a toehold in the middle class” (p.293).

        In Barack’s second term, she adopted a new mission, called Reach Higher, which aimed to push disadvantaged teenagers toward college. Throughout her time as FLOTUS, Michelle tried valiantly to provide her two daughters with as close to a normal childhood as life in the White House bubble might permit. Slevin’s account stops just prior to the 2014 Congressional elections, when the Republicans gained control of the United States Senate, after gaining control of the House of Representatives in the prior mid-term elections in 2010.

       Slevin does not overlook the incessant Republican and conservative critics of Michelle. She appeared to many whites in the 2008 campaign as an “angry black woman,” which Slevin dismisses as a “simplistic and pernicious stereotype” (p.236). Right wing commentator Rush Limbaugh began calling her “Moochelle,” much to the delight of his listening audience. The moniker conjured images of a fat cow or a leech – synonymous with the term “moocher” which Ayn Rand used in her novels to describe those who “supposedly lived off the hard work of the producers” (p.316) — all the while slyly associating Michelle with “big government, the welfare state, big-spending Democrats, and black people living on the dole” (p.315).  Vitriol such as this, Slevin cautiously concludes, “could be traced to racism and sexism or, at a charitable minimum, a lack of familiarity with a black woman as accomplished and outspoken as Michelle” (p.286). In addition, criticism emerged from the political left, which “viewed Michelle positively but asked why, given her education, her experience, and her extraordinary platform, she did not speak or act more directly on a host of progressive issues, whether abortion rights, gender inequity, or the structural obstacles facing the urban poor” (p.286).

* * *

       Slevin’s book is not hagiography. As a conscientious biographer whose credibility is directly connected to his objectivity, Slevin undoubtedly looked long and hard for Michelle’s weak points and less endearing qualities. He did not come up with much, unless you consider being a strong, focused woman a negative quality. There is no real dark side to Michelle Obama in Slevin’s account, no apparent skeletons in any of her closets. Rather, the unlikely FLOTUS depicted here continues to reflect the values she acquired while growing up in Fraser and Marian Robinson’s remarkable South Side household.

 

Thomas H. Peebles

La Châtaigneraie, France

September 17, 2016


Filed under American Politics, American Society, Biography, Gender Issues, Politics, United States History

Changing the Definition of Literature in the Eyes of the Law


Kevin Birmingham, The Most Dangerous Book:
The Battle for James Joyce’s Ulysses

      James Joyce’s enigmatic masterpiece novel Ulysses was first published in book form in France in 1922. Portions of the novel had by then already appeared as magazine excerpts in the United States and Great Britain. The previous year, a court in the United States had declared several such excerpts obscene, and British authorities  followed suit in 1923. In The Most Dangerous Book: The Battle for James Joyce’s Ulysses, Kevin Birmingham describes the furor which the novel provoked and the scheming that was required to bring the novel to readers.

     Birmingham, a lecturer in history and literature at Harvard, characterizes his work as the “biography of a book” (p.2). Its core is the twofold story of the many benefactors who aided Joyce in maneuvering around publication obstacles; and of the evolution of legal standards for judging literature claimed to be obscene. Birmingham also provides much insight into Joyce the author, his view of art, and the World War I era literary world in which he operated. The book, Birmingham’s first, further serves as a useful introduction to Ulysses itself for those readers, myself emphatically included, who have not yet mustered the courage to tackle Joyce’s masterpiece.

     Ulysses depicts a single day in Dublin, June 16, 1904. On the surface, the novel follows three central characters: Stephen Dedalus, Leopold Bloom, and Bloom’s wife Molly. But Ulysses is also a retelling of Homer’s Odyssey, with the three main characters serving as modern versions of Telemachus, Ulysses, and Penelope. Peering into the 20th century through what Birmingham terms the “cracked looking glass of antiquity” (p.54), Joyce sought to capture both the erotic pleasures and intense pains of the human body; fornication and masturbation, defecation and disease were all part of the human experience that Joyce sought to convey. He even termed his work an “epic of the human body” (p.14).

     Treating sexuality in a more forthright manner than what public authorities in the United States and Great Britain were willing to countenance — sex at the time “just wasn’t something a legitimate novelist portrayed” (p.64) — Ulysses was deemed a threat to public morality, and was subject to censorship, confiscation and book burning spectacles. But the charges levied against Ulysses were about “more than the right to publish sexually explicit material” (p.6), Birmingham contends. They also involved a clash between two rising forces, modern print culture and modern governmental regulatory power, and were thus part of a larger struggle between state authority and individual freedom that intensified in the early twentieth century, “when more people began to challenge governmental control over whatever speech the state considered harmful” (p.6).

     There is a meandering quality to much of Birmingham’s narrative, which shifts back and forth between Joyce himself, his literary friends and supporters, and those who challenged Ulysses in the name of public morality. At times, it is difficult to tie these threads together. But the book regains its footing in a final section describing the definitive trial and landmark 1933 judicial ruling, the case of United States vs. One Book Called Ulysses, which held that the novel was not obscene. The decision constituted the last significant hurdle for Joyce’s book, after which it circulated freely to readers in the United States and elsewhere.  In his section on this case, Birmingham’s central point comes into full focus:  Ulysses changed not only the course of literature but also the “very definition of literature in the eyes of the law” (p.2).

* * *

     James Joyce was born in Dublin in 1882, educated at Catholic schools and University College, Dublin. As a boy, Joyce and his family moved so frequently within Dublin that Joyce could plausibly claim to know almost all the city’s neighborhoods.  But Joyce spent little of his professional career in Dublin. Sometime in 1903 or 1904, Joyce met and fell in love with Nora Barnacle, a chambermaid from rural Galway then working in a Dublin hotel. Barnacle followed Joyce across Europe, bore their children, inspired his literary talent, and eventually became his wife. Joyce and Barnacle lived for several years in the Italian port city of Trieste, then in Zurich and Rome. But the two are best known for their time in Paris, where Joyce became one of the most renowned expatriate writers of the so-called Lost Generation. In 1914, Joyce published Dubliners, a collection of 15 short stories. Two years later, he completed his first novel, A Portrait of the Artist as a Young Man. While not a major commercial success, the book caught the attention of the American poet, Ezra Pound, then living in London. During this time, Joyce also began writing Ulysses.

      The single day depicted in the novel, June 16, 1904, was the day that Joyce and Barnacle first met. Although there may have been single-day novels before Ulysses, “no one thought of a day as an epic. Joyce was planning to turn a single day into a recursive unit of dazzling complexity in which the circadian part was simultaneously the epochal whole. A June day in Dublin would be a fractal of Western civilization” (p.55). The idea of Homeric correspondences and embedding references to the Odyssey into early 20th century Dublin may seem “indulgent,” Birmingham writes, yet Joyce executed it “so subtly that the novel can become a scavenger hunt for pedants . . . Some allusions are so obscure that their pleasure seems to reside in their remaining hidden” (p.130-31).

     In the early 20th century, censors sought to ban obscene works in part to protect the sensibilities of women and children, especially in large urban centers like London and New York. It is thus ironic that strong and forward-minded women are central to Birmingham’s story, standing behind Joyce and assuming the considerable risks which the effort to publish Ulysses entailed. The first two, Americans Margaret Anderson and Jane Heap, were co-publishers of an avant-garde magazine, The Little Review, an “unlikely product of Wall Street money and Greenwich Village bohemia” (p.7-8), and one of several small, “do-it-yourself” magazines which Birmingham describes as “outposts of modernism” (p.71). From London, Ezra Pound linked Joyce to Anderson and Heap, and The Little Review began to publish Ulysses in 1918 in serial form.

      In 1921, New York postal authorities sought to confiscate portions of Ulysses published in The Little Review under the authority of the Comstock Act, an 1873 statute that made it a crime, punishable by up to ten years in prison and a $10,000 fine, to utilize the United States mail to distribute or advertise obscene, lewd or lascivious materials. The Comstock Act adopted the “Hicklin rule” for determining obscenity, a definition from an 1868 English case, Regina v. Hicklin: “whether the tendency of the matter charged as obscenity is to deprave and corrupt those whose minds are open to such immoral influences and into whose hands a publication of this sort may fall” (p.168).

     The Hicklin rule’s emphasis upon “tendency” to deprave and corrupt defined obscenity by a work’s potential effects on “society’s most susceptible readers – anyone with a mind ‘open’ to ‘immoral influences.’ . . . Lecherous readers and excitable teenage daughters could deprave and corrupt the most sophisticated literary intent” (p.168). The Hicklin rule further permitted judges to look at individual words or passages without considering their place in the work as a whole and without considering the work’s artistic or literary value. Finding that portions of Ulysses under review were obscene under the Hicklin rule, a New York court sentenced Anderson and Heap to 10 days in prison or $100 fines. The Post Office sent seized copies of The Little Review to the Salvation Army, “where fallen women in reform programs were instructed to tear them apart” (p.197). The court’s decision served as a ban on publication and distribution of Ulysses in the United States for more than a decade.

     The court’s decision also highlighted the paradoxical role of the Post Office in the early 20th century. Although the postal service “made it possible for avant-garde texts to circulate cheaply and openly to wherever their kindred readers lived,” it was also the institution that could “inspect, seize and burn those texts” (p.7). Moreover, government suppression of sexually explicit material in the United States during and immediately after World War I shaded into its efforts to stamp out political radicalism. Ulysses encountered obstacles to publication in the United States not so much because “vigilantes were searching for pornography but because government censors in the Post Office were searching for foreign spies, radicals and anarchists, and it made no difference if they were political or philosophical or if they considered themselves artists” (p.109).

     Meanwhile, in Great Britain, Harriet Shaw Weaver, a “prim London spinster” (p.12), published Ulysses in serial form in a similarly obscure London publication, The Egoist, also supported by Ezra Pound. After Leonard and Virginia Woolf refused to publish Ulysses in Britain, Weaver imported a full version of the novel from France. In 1923, Sir Archibald Bodkin, the Director of Public Prosecutions, concluded that Ulysses was “filthy” and that “filthy books are not allowed to be imported into this country” (p.253; Bodkin also vigorously prosecuted war resisters during World War I, as discussed in Adam Hochschild’s To End All Wars: A Story of Loyalty and Rebellion, reviewed here in November 2014). Sir Archibald’s ruling authorized British authorities to seize 500 copies of Ulysses coming from France and burn them in the “King’s Chimney.”

      The copies subject to Bodkin’s ruling had been printed at the behest of Sylvia Beach, the American expatriate who founded the iconic Parisian bookstore Shakespeare & Company, a “hybrid space, something between an open café and an ensconced literary salon” (p.150), and a home away from home for Joyce, the young Ernest Hemingway, and other members of the Lost Generation of expatriate writers. After Beach became the first to publish Ulysses in book form in 1922, she went on to publish eight editions of the novel and Shakespeare & Company “became a pilgrimage destination for budding Joyceans, several of whom asked Miss Beach if they could move to Paris and work for her” (p.260).

     Over the next decade, Joyce’s novel became an “underground sensation” (p.3), banned implicitly in the United States and explicitly in Great Britain. Editions of Ulysses were smuggled from France into the United States, often through Canada. The book was “literary contraband, a novel you could read only if you found a copy counterfeited by literary pirates or if you smuggled it past customs agents” (p.3). Throughout the decade, Joyce’s health deteriorated appreciably. He had multiple eye problems and, despite numerous ocular surgeries – described in jarringly gruesome detail here — he lost his sight. He also contracted syphilis. By the mid-1920s, Birmingham writes, Joyce was “already an old man. The ashplant cane that he had used for swagger as a young bachelor in Dublin became a blind man’s cane in Paris. Strangers helped him cross the street, and he bumped into furniture as he navigated through his own apartment” (p.289).

* * *

     In 1932, Beach relinquished her claims for royalties from Ulysses.  The up-and-coming New York publishing firm Random House, under its ambitious young owner Bennett Cerf, then signed a contract with Joyce for publication and distribution rights in the United States, even though the 1921 court decision still served as a ban on distribution of the novel. To formulate a test case, Random House’s attorney, Morris Ernst, a co-founder of the American Civil Liberties Union, almost begged Customs inspectors to confiscate a copy of Ulysses. Initially, an inspector responded that “everybody brings that [Ulysses] in. We don’t pay attention to it” (p.306).  But the book was seized and, some seven months later, the United States Attorney in New York brought a case for forfeiture and confiscation under a statute that allowed an action against the book itself, rather than its publishers or importers. The United States Attorney instituted the test case in the fall of 1933, a few short months after the first book burnings in Nazi Germany.

     The case was assigned to Judge John Woolsey, a direct descendant of the 18th century theologian Jonathan Edwards. Ernst sought to convince Judge Woolsey that the first amendment to the United States Constitution should serve to protect artistic as well as political expression and that the Hicklin rule should be discarded. Under Ernst’s argument, Ulysses merited first amendment protection as a serious literary work, “’too precious’ to be sacrificed to unsophisticated readers” (p.320). Ernst went on to contend that obscenity was a “living standard.” Even if Ulysses had been obscene at the time The Little Review excerpts had been condemned a decade earlier, it could still be protected expression in 1933, given the vast changes in public morality standards since The Little Review ruling.

     Unlike the judges who had considered The Little Review excerpts, Judge Woolsey  took the time to read the novel and ended up agreeing with Ernst. He found portions of the book “disgusting” with “many words usually considered dirty.” But he found nothing that amounted to “dirt for dirt’s sake” (p.329). Rather, each word of the book:

contributes like a bit of mosaic to the detail of the picture which Joyce is seeking to construct for his readers. . . when such a great artist in words, as Joyce undoubtedly is, seeks to draw a true picture of the lower middle class in a European city, ought it to be impossible for the American public legally to see that picture? (p.329).

Answering his question in the negative, Judge Woolsey ruled that Joyce’s novel was not obscene and could be admitted into the United States.

     A three-judge panel of the Second Circuit Court of Appeals affirmed Judge Woolsey’s decision, 2-1. The majority consisted of two of the most renowned jurists of the era, Learned Hand, who had been pushing for a more modern definition of obscenity for years; and his cousin, Augustus Hand, who wrote the majority opinion.  Once the appeals court issued its decision, Cerf inserted Judge Woolsey’s decision into the Random House printings of the novel, making it arguably the most widely distributed judicial opinion in history.  Two years later, the trial and appellate court decisions in the United States influenced Britain to abandon the 1868 Hicklin rule. Obscenity in Britain would no longer be a matter of identifying a book’s tendency to deprave and corrupt. Rather, the government must “consider intent and context – the character of a book was all contingent” (p.336).

     United States vs. One Book Called Ulysses established a test for determining whether a work is obscene and thus outside the protection of the first amendment, that, in somewhat modified form, still applies today in the United States.  This test requires a court to consider: (1) the literary worth of the work as a whole, not just selected excerpts; (2) the effect on an average reader, rather than an overly sensitive one; and (3) evolving contemporary community standards.  The decision, Birmingham argues, removed “all barriers to art” and led to “unfettered freedom of artistic form, style and content – literary freedoms that were as political as any speech protected by the First Amendment” (p.11).

* * *

     It is an open question whether Birmingham’s book will inspire readers who have not yet read Joyce’s masterwork to do so. But even those reluctant to undertake Joyce’s work should appreciate Birmingham’s account of how forward-minded early 20th century publishers and members of the literary world schemed to bring Ulysses to the light of day; and how judicial standards evolved to allow room for literary works treating human sexuality candidly and openly.

Thomas H. Peebles
Silver Spring, Maryland
July 29, 2016


Filed under American Society, History, Literature

Turning the Ship of Ideas in a Different Direction


Tony Judt, When the Facts Change,

Essays 1995-2010, edited by Jennifer Homans

      In a 2013 review of Rethinking the 20th Century, I explained how the late Tony Judt became my “main man.” He was an expert in the very areas of my greatest, albeit amateurish, interest: French and European 20th century history and political theory; what to make of Communism, Nazism and Fascism; and, later in his career, the contributions of Central and Eastern European thinkers to our understanding of Europe and what he often termed the “murderous” 20th century. Moreover, Judt was a contemporary, born in Great Britain in 1948, the son of Jewish refugees. Raised in South London and educated at King’s College, Cambridge, Judt spent time as a recently minted Cambridge graduate at Paris’ fabled Ecole Normale Supérieure; he lived on a kibbutz in Israel and contributed to the Israeli cause in the 1967 Six Day War; and had what he termed a mid-life crisis, which he spent in Prague, learning the Czech language and absorbing the rich Czech intellectual and cultural heritage.  Judt also had several teaching stints in the United States and became an American citizen. In 1995, he founded the Remarque Institute at New York University, where he remained until he died in 2010, age 62, of amyotrophic lateral sclerosis (ALS), which Americans know as “Lou Gehrig’s Disease.”

      Rethinking the 20th Century was more of an informal conversation with Yale historian Timothy Snyder than a book written by Judt. Judt’s best-known work was a magisterial history of post-World War II Europe, entitled simply Postwar. His other published writings included incisive studies of obscure left-wing French political theorists and the “public intellectuals” who animated France’s always lively 20th century debate about the role of the individual and the state (key subjects of Sudhir Hazareesingh’s How the French Think: An Affectionate Portrait of an Intellectual People, reviewed here in June).  Among French public intellectuals, Judt reserved particular affection for Albert Camus and particular scorn for Jean-Paul Sartre.  While at the Remarque Institute, Judt became himself the epitome of a public intellectual, gaining much attention outside academic circles for his commentaries on contemporary events.  Judt’s contributions to public debate are on full display in When the Facts Change, Essays 1995-2010, a collection of 28 essays edited by Judt’s wife Jennifer Homans, former dance critic for The New Republic.

      The collection includes book reviews and articles originally published elsewhere, especially in The New York Review of Books, along with a single previously unpublished entry. The title refers to a quotation which Homans considers likely apocryphal, attributed to John Maynard Keynes: “when the facts change, I change my mind – what do you do, sir?” (p.4). In Judt’s case, the major changes of mind occurred early in his professional life, when he repudiated his youthful infatuation with Marxism and Zionism. But throughout his adult life and especially in his last fifteen years, Homans indicates, as facts changed and events unfolded, Judt “found himself turned increasingly and unhappily against the current, fighting with all of his intellectual might to turn the ship of ideas, however slightly, in a different direction” (p.1).  While wide-ranging in subject matter, the collection’s entries bring into particularly sharp focus Judt’s outspoken opposition to the 2003 American invasion of Iraq, his harsh criticism of Israeli policies toward its Palestinian population, and his often-eloquent support for European continental social democracy.

* * *

      The first essay in the collection, a 1995 review of Eric Hobsbawm’s The Age of Extremes: A History of the World, 1914-1991, should be of special interest to tomsbooks readers. Last fall, I reviewed Fractured Times: Culture and Society in the Twentieth Century, a collection of Hobsbawm’s essays.  Judt noted that Hobsbawm had “irrevocably shaped” all who took up the study of history between 1959 and 1975 — what Judt termed the “Hobsbawm generation” of historians (p.13). But Judt contended that Hobsbawm’s relationship to the Soviet Union — he was a lifelong member of Britain’s Communist Party – clouded his analysis of 20th century Europe. The “desire to find at least some residual meaning in the whole Communist experience” explains what Judt found to be a “rather flat quality to Hobsbawm’s account of the Stalinist terror” (p.26). That the Soviet Union “purported to stand for a good cause, indeed the only worthwhile cause,” Judt concluded, is what “mitigated its crimes for many in Hobsbawm’s generation.” Others – likely speaking for himself — “might say it just made them worse” (p.26-27).

      In the first decade of the 21st century, Judt became known as an early and fervently outspoken critic of the 2003 American intervention in Iraq.  Judt wrote in the New York Review of Books in May 2003, two months after the U.S.-led invasion, that President Bush and his advisers had “[u]nbelievably” managed to “make America seem the greatest threat to international stability.” A mere eighteen months after September 11, 2001:

the United States may have gambled away the confidence of the world. By staking a monopoly claim on Western values and their defense, the United States has prompted other Westerners to reflect on what divides them from America. By enthusiastically asserting its right to reconfigure the Muslim world, Washington has reminded Europeans in particular of the growing Muslim presence in their own cultures and its political implications. In short, the United States has given a lot of people occasion to rethink their relationship with it (p.231).

Using Madeleine Albright’s formulation, Judt asked whether the world’s “indispensable nation” had miscalculated and overreached. “Almost certainly” was his response to his own question, to which he added: “When the earthquake abates, the tectonic plates of international politics will have shifted forever” (p.232). Thirteen years later, in the age of ISIS, Iranian ascendancy and interminable civil wars in Iraq and Syria, Judt’s May 2003 prognostication strikes me as frightfully accurate.

      Judt’s essays dealing with the state of Israel and the seemingly intractable Israeli-Palestinian conflict generated rage, drawing in particular the wrath of pro-Israeli American lobbying groups. Judt, who contributed to Israel’s war effort in the 1967 Six Day War as a driver and translator for the Israeli military, came to consider the state of Israel an anachronism. The idea of a Jewish state, in which “Jews and the Jewish religion have exclusive privileges from which non-Jewish citizens are forever excluded,” he wrote in 2003, is “rooted in another time and place” (p.116). Although “multi-cultural in all but name,” Israel was “distinctive among democratic states in its resort to ethno-religious criteria with which to denominate and rank its citizens” (p.121).

      Judt noted in 2009 that the Israel of Benjamin Netanyahu was “certainly less hypocritical than that of the old Labor governments. Unlike most of its predecessors reaching back to 1967, it does not even pretend to seek reconciliation with the Arabs over which it rules” (p. 157-58). Israel’s “abusive treatment of the Palestinians,” he warned, is the “chief proximate cause of the resurgence of anti-Semitism worldwide. It is the single most effective recruiting agent for radical Islamic movements” (p.167). Vilified for these contentions, Judt repeatedly pleaded for recognition of what should be, but unfortunately is not, the self-evident proposition that one can criticize Israeli policies without being anti-Semitic or even anti-Israel.

      Judt was arguably the most influential American proponent of European social democracy, the form of governance that flourished in Western Europe between roughly 1950 and 1980 and became the model for Eastern European states emerging from communism after 1989, with a strong social safety net, free but heavily regulated markets, and strong respect for individual liberties and the rule of law. Judt characterized social democracy as the “prose of contemporary European politics” (p.331). With the fall of communism and the demise of an authoritarian Left, the emphasis upon democracy had become “largely redundant,” Judt contended. “We are all democrats today. But ‘social’ still means something – arguably more now than some decades back when a role for the public sector was uncontentiously conceded by all sides” (p.332). Judt saw social democracy as the counterpoint to what he termed “neo-liberalism” or globalization, characterized by the rise of income inequality, the cult of privatization, and the tendency – most pronounced in the Anglo-American world – to regard unfettered free markets as the key to widespread prosperity.

      Judt asked 21st century policy makers to take what he termed a “second glance” at how “our twentieth century predecessors responded to the political challenge of economic uncertainty” (p.315). In a 2007 review of Robert Reich’s Supercapitalism: The Transformation of Business, Democracy, and Everyday Life, Judt argued that the universal provision of social services and some restriction upon inequalities of income and wealth are “important economic variables in themselves, furnishing the necessary public cohesion and political confidence for a sustained prosperity – and that only the state has the resources and the authority to provide those services and enforce those restrictions in our collective name” (p.315).  A second glance would also reveal that a healthy democracy, “far from being threatened by the regulatory state, actually depends upon it: that in a world increasingly polarized between insecure individuals and unregulated global forces, the legitimate authority of the democratic state may be the best kind of intermediate institution we can devise” (p.315-16).

      Judt’s review of Reich’s book anticipated the anxieties that one sees in both Europe and America today. Fear of the type last seen in the 1920s and 1930s had re-emerged as an “active ingredient of political life in Western democracies” (p.314), Judt observed one year prior to the economic downturn of 2008. Indeed, one can be forgiven for thinking that Judt had the convulsive phenomena of Brexit in Britain and Donald Trump in the United States in mind when he emphasized how fear had woven itself into the fabric of modern political life:

Fear of terrorism, of course, but also, and perhaps more insidiously, fear of uncontrollable speed of change, fear of the loss of employment, fear of losing ground to others in an increasingly unequal distribution of resources, fear of losing control of the circumstances and routines of one’s daily life. And perhaps above all, fear that it is not just we who can no longer shape our lives but that those in authority have lost control as well, to forces beyond their reach . . . This is already happening in many countries: note the rising attraction of protectionism in American politics, the appeal of ‘anti-immigrant’ parties across Western Europe, the calls for ‘walls,’ ‘barriers,’ and ‘tests’ everywhere (p.314).

       Judt buttressed his case for social democracy with a tribute to the railroad as a symbol of 19th and 20th century modernity and social cohesion. In essays that were intended to be part of a separate book, Judt contended that the railways “were and remain the necessary and natural accompaniment to the emergence of civil society. They are a collective project for individual benefit. They cannot exist without common accord . . . and by design they offer a practical benefit to individual and collectivity alike” (p.301). Although we “no longer see the modern world through the image of the train,” we nonetheless “continue to live in the world the trains made.” The post-railway world of cars and planes “turns out, like so much else about the decades 1950-1990, to have been a parenthesis: driven, in this case, by the illusion of perennially cheap fuel and the attendant cult of privatization. . . What was, for a while, old-fashioned has once again become very modern” (p.299).

      In a November 2001 essay appearing in The New York Review of Books, Judt offered a novel interpretation of Camus’ The Plague as an allegory for France in the aftermath of German occupation, a “firebell in the night of complacency and forgetting” (p.181). Camus used The Plague to counter the “smug myth of heroism that had grown up in postwar France” (p.178), Judt argued. The collection concludes with Judt’s elegies to three thinkers he revered: François Furet, Amos Elon, and Leszek Kołakowski – a French historian, an Israeli writer, and a Polish philosopher and dissident under communism – who represent key points along Judt’s own intellectual journey.

***

      The 28 essays that Homans has artfully pieced together showcase Judt’s prowess as an interpreter and advocate – as a public intellectual – informed by his wide-ranging academic and scholarly work. They convey little of Judt’s personal side. Readers seeking to know more about Judt the man may look to The Memory Chalet, his memoir posthumously published in 2010. In this collection, they will find an opportunity to savor Judt’s incisive if often acerbic brilliance and to appreciate how he brought his prodigious learning to bear upon key issues of his time.

Thomas H. Peebles
La Châtaigneraie, France
July 6, 2016

