Honest Broker

 

 

Michael Doran, Ike’s Gamble:

America’s Rise to Dominance in the Middle East 

 

       On July 26, 1956, Egypt’s President Gamal Abdel Nasser stunned the world by announcing the nationalization of the Suez Canal, a critical conduit through Egypt for the transportation of oil between the Mediterranean Sea and the Indian Ocean. Constructed between 1859 and 1869, the canal was owned by the Anglo-French Suez Canal Company. What followed three months later was the Suez Crisis of 1956: on October 29, Israeli brigades invaded Egypt across its Sinai Peninsula, advancing to within ten miles of the canal.  Britain and France, following a scheme concocted with Israel to retake the canal and oust Nasser, demanded that both Israeli and Egyptian troops withdraw to positions ten miles from the canal. Then, on November 5th, British and French forces invaded Egypt and occupied most of the Canal Zone, the territory along the canal. The United States famously opposed the joint operation and, through the United Nations, forced Britain and France out of Egypt.  Nearly simultaneously, the Soviet Union ruthlessly suppressed an uprising in Hungary.

       The autumn of 1956 was thus a tumultuous time. Across the globe, it was a time when colonies were clamoring for and achieving independence from former colonizers, and the United States and the Soviet Union were competing for the allegiance of emerging states in what was coming to be known as the Third World.  In the volatile and complex Middle East, it was a time of rising nationalism. Nasser, a wildly ambitious army officer who came to power after a 1952 military coup deposed the King of Egypt, aspired to become the leader not simply of his country but of the Arabic-speaking world, even the entire Muslim world.  By 1956, Nasser had emerged as the region’s most visible nationalist. But his was far from the only voice seeking to speak for Middle Eastern nationalism. Syria, Jordan, Lebanon and Iraq were also imbued with the rising spirit of nationalism and saw Nasser as a rival, not a fraternal comrade-in-arms.

       Michael Doran’s Ike’s Gamble: America’s Rise to Dominance in the Middle East provides background and context for the United States’ decision not to support Britain, France and Israel during the 1956 Suez crisis. As his title suggests, Doran places America’s President, war hero and father figure Dwight D. Eisenhower, known affectionately as Ike, at the middle of the complicated Middle East web (although Nasser probably merited a place in Doran’s title: “Ike’s Gamble on Nasser” would have better captured the spirit of the narrative). Behind the perpetual smile, Eisenhower was a cold-blooded realist who was “unshakably convinced” (p.214) that the best way to advance American interests in the Middle East and hold Soviet ambitions in check was for the United States to play the role of an “honest broker” in the region, sympathetic to the region’s nationalist aspirations and not too closely aligned with its traditional allies Britain and France, or with the young state of Israel.

       But Doran, a senior fellow at the Hudson Institute and former high level official at the National Security Council and Department of Defense in the administration of George W. Bush, goes on to argue that Eisenhower’s vision of the honest broker – and his “bet” on Nasser – were undermined by the United States’ failure to recognize the “deepest drivers of the Arab and Muslim states, namely their rivalries with each other for power and authority” (p.105). Less than two years after taking Nasser’s side in the 1956 Suez Crisis, Eisenhower seemed to reverse himself.  By mid-1958, Doran reveals, Eisenhower had come to regret his bet on Nasser and his refusal to back Britain, France and Israel during the crisis. Eisenhower kept this view largely to himself, however, distorting the historical picture of his Middle East policies.

        Although Doran considers Eisenhower “one of the most sophisticated and experienced practitioners of international politics ever to reside in the White House,” the story of his relationship with Nasser is at bottom a lesson in the “dangers of calibrating the distinction between ally and enemy incorrectly” (p.13).  Or, as he puts it elsewhere, Eisenhower’s “bet” on Nasser’s regime is a “tale of Frankenstein’s monster, with the United States as the mad scientist and the new regime as his uncontrollable creation” (p.10).

* * *

      The “honest broker” approach to the Middle East dominated the Eisenhower administration from its earliest days in 1953. Eisenhower, his Secretary of State John Foster Dulles, and most of their key advisors shared a common picture of the volatile region. Trying to wind down a war in Korea they had inherited from the Truman Administration, they considered the Middle East the next and most critical region of confrontation in the global Cold War between the Soviet Union and the United States.  As they saw it, in the Middle East the United States found itself caught between Arabs and other “indigenous” nationalities on one side, and the British, French, and Israelis on the other. “Each side had hold of one arm of the United States, which they were pulling like a tug rope. The picture was so obvious to almost everyone in the Eisenhower administration that it was understood as an objective description of reality” (p.44). It is impossible, Doran writes, to exaggerate the “impact that the image of America as an honest broker had on Eisenhower’s thought . . . The notion that the top priority of the United States was to co-opt Arab nationalists by helping them extract concessions – within limits – from Britain and Israel was not open to debate. It was a view that shaped all other policy proposals” (p.10).

         Alongside Ike’s “bet” on Nasser, the book’s second major theme is the deterioration of the famous “special relationship” between Britain and the United States during Eisenhower’s first term, due in large measure to differences over Egypt, the Suez Canal, and Nasser (and, to quibble further with the book’s title, “Britain’s Fall from Power in the Middle East” in my view would have captured the spirit of the narrative better than “America’s Rise to Dominance in the Middle East”).  The Eisenhower administration viewed Britain’s once mighty empire as a relic of the past, out of place in the post World War II order. It viewed Britain’s leader, Prime Minister Winston Churchill, in much the same way. Eisenhower entered his presidency convinced that it was time for Churchill, then approaching age 80, to exit the world stage and for Britain to relinquish control of its remaining colonial possessions – in Egypt, its military base and sizeable military presence along the Suez Canal.

      Anthony Eden replaced Churchill as prime minister in 1955.  A leading anti-appeasement ally of Churchill in the 1930s, by the 1950s Eden shared Eisenhower’s view that Churchill had become a “wondrous relic” who was “stubbornly clinging to outmoded ideas” (p.20) about Britain’s empire and its place in the world.  Although interested in aligning Britain’s policies with the realities of the post World War II era, Eden led the British assault on Suez in 1956.  With  “his career destroyed” (p.202), Eden was forced to resign early in 1957.

       If the United States today also has a “special relationship” with Israel, that relationship had yet to emerge during the first Eisenhower term.  Israel’s circumstances were of course entirely different from those of Britain and France: it was a young country surrounded by Arab states implacably hostile to its very existence. President Truman had formally recognized Israel less than a decade earlier, in 1948.  But substantial segments of America’s foreign policy establishment in the 1950s continued to believe that such recognition had been in error. Not least among them was John Foster Dulles, Eisenhower’s Secretary of State.  There seemed to be more than a whiff of anti-Semitism in Dulles’ antagonism toward Israel.

         Describing Israel as the “darling of Jewry throughout the world” (p.98), Dulles decried the “potency of international Jewry” (p.98) and warned that the United States should not be seen as a “backer of expansionist Zionism” (p.77).  For the first two years of the Eisenhower administration, Dulles followed a policy designed to “’deflate the Jews’ . . . by refusing to sell arms to Israel, rebuffing Israeli requests for security guarantees, and diminishing the level of financial assistance to the Jewish state” (p.99).   Dulles’ views were far from idiosyncratic. Israel “stirred up deep hostility among the Arabs” and many of America’s foreign policy elites in the 1950s “saw Israel as a liability” (p.9). Without success, the United States sought Nasser’s agreement to an Arab-Israeli accord which would have required limited territorial concessions from Israel.

       Behind the scenes, however, the United States brokered a 1954 Anglo-Egyptian agreement, by which Britain would withdraw from its military base in the Canal Zone over an 18-month period, with Egypt agreeing that Britain could return to its base in the event of a major war. Doran terms this Eisenhower’s “first bet” on Nasser. Ike “wagered that the evacuation of the British from Egypt would sate Nasser’s nationalist appetite. The Egyptian leader, having learned that the United States was willing and able to act as a strategic partner, would now keep Egypt solidly within the Western security system. It would not take long before Eisenhower would come to realize that Nasser’s appetite only increased with eating” (p.67-68).

        As the United States courted Nasser as a voice of Arab nationalism and a bulwark against Soviet expansion into the region, it also encouraged other Arab voices. In what the United States imprecisely termed the “Northern Tier,” it supported security pacts between Turkey and Iraq and made overtures to Egypt’s neighbors Syria and Jordan. Nasser adamantly opposed these measures, considering them a means of constraining his own regional aspirations and preserving Western influence through the back door.  The “fatal intellectual flaw” of the United States’ honest broker strategy, Doran argues, was that it “imagined the Arabs and Muslims as a unified bloc. It paid no attention whatsoever to all of the bitter rivalries in the Middle East that had no connection to the British and Israeli millstones. Consequently, Nasser’s disputes with his rivals simply did not register in Washington as factors of strategic significance” (p.78).

           In September 1955, Nasser shocked the United States by concluding an agreement to buy arms from the Soviet Union, through Czechoslovakia, one of several indications that he was at best playing the West against the Soviet Union, at worst tilting toward the Soviet side.  Another came in May 1956, when Egypt formally recognized Communist China. In July 1956, partially in reaction to Nasser’s pro-Soviet dalliances, Dulles informed the Egyptian leader that the United States was pulling out of a project to provide funding for a dam across the Nile River at Aswan, Nasser’s “flagship development project . . . [which was] expected to bring under cultivation hundreds of thousands of acres of arid land and to generate millions of watts of electricity” (p.167).

         Days later, Nasser countered by announcing the nationalization of the Suez Canal, predicting that the tolls collected from ships passing through the canal would pay for the dam’s construction within five years. Doran characterizes Nasser’s decision to nationalize the canal as the “single greatest move of his career.” It is impossible to exaggerate, he contends, the “power of the emotions that the canal takeover stirred in ordinary Egyptians. If Europeans claimed that the company was a private concern, Egyptians saw it as an instrument of imperial exploitation – ‘a state within a state’. . . [that was] plundering a national asset for the benefit of France and Britain” (p.171).

            France, otherwise largely missing in Doran’s detailed account, concocted the scheme that led to the October 1956 crisis.  Concerned that Nasser was providing arms to anti-French rebels in Algeria, France proposed to Israel what Doran terms a “stranger than fiction” (p.189) plot by which the Israelis would invade Egypt. Then, in order to protect shipping through the canal, France and Britain would:

issue an ultimatum demanding that the belligerents withdraw to a position of ten miles on either side of the canal, or face severe consequences. The Israelis, by prior arrangement, would comply. Nasser, however, would inevitably reject the ultimatum, because it would leave Israeli forces inside Egypt while simultaneously compelling Egyptian forces to withdraw from their own sovereign territory. An Anglo-French force would then intervene to punish Egypt for noncompliance. It would take over the canal and, in the process, topple Nasser (p.189).

The crisis unfolded more or less according to this script when Israeli brigades invaded Egypt on October 29th and Britain and France launched their joint invasion on November 5th. Nasser sank ships in the canal and blocked oil tankers headed through the canal to Europe.

         Convinced that acquiescence in the invasion would drive the entire Arab world to the Soviet side in the global Cold War, the United States issued measured warnings to Britain and France to give up their campaign and withdraw from Egyptian soil. If Nasser was by then a disappointment to the United States, Doran writes, the “smart money was still on an alliance with moderate nationalism, not with dying empires” (p.178). But when Eden telephoned the White House on November 7, 1956, largely to protest the United States’ refusal to sell oil to Britain, Ike went further. In that phone call, Eisenhower as honest broker “decided that Nasser must win the war, and that he must be seen to win” (p.249).  Eisenhower’s hardening toward his traditional allies a week into the crisis, Doran contends, constituted his “most fateful decision of the Suez Crisis: to stand against the British, French, and Israelis in [a] manner that was relentless, ruthless, and uncompromising . . . [Eisenhower] demanded, with single-minded purpose, the total and unconditional British, French, and Israeli evacuation from Egypt. These steps, not the original decision to oppose the war, were the key factors that gave Nasser the triumph of his life” (p.248-49).

        When the financial markets caught wind of the blocked oil supplies, the value of the British pound plummeted and a run on sterling reserves ensued. “With his currency in free fall, Eden became ever more vulnerable to pressure from Eisenhower. Stabilizing the markets required the cooperation of the United States, which the Americans refused to give until the British accepted a complete, immediate, and unconditional withdrawal from Egypt” (p.196). At almost the same time, Soviet tanks poured into Budapest to suppress a burgeoning Hungarian pro-democracy movement. The crisis in Eastern Europe had the effect of “intensifying Eisenhower’s and Dulles’s frustration with the British and the French. As they saw it, Soviet repression in Hungary offered the West a prime opportunity to capture the moral high ground in international politics – an opportunity that the gunboat diplomacy in Egypt was destroying” (p.197). The United States supported a United Nations General Assembly resolution calling for an immediate ceasefire and withdrawal of invading troops. Britain, France and Israel had little choice but to accept these terms in December 1956.

       In the aftermath of the Suez Crisis, the emboldened Nasser continued his quest to become the region’s dominant leader. In February 1958, he engineered the formation of the United Arab Republic, a political union between Egypt and Syria that he envisioned as the first step toward a broader pan-Arab state (in fact, the union lasted only until 1961). He orchestrated a coup in Iraq in July 1958. Later that month, Eisenhower sent American troops into Lebanon to avert an Egyptian-led uprising against the pro-western government of Christian president Camille Chamoun. Sometime in the period between the Suez Crisis of 1956 and the intervention in Lebanon in 1958, Doran argues, Eisenhower withdrew his bet on Nasser, coming to the view that his support of Egypt during the 1956 Suez crisis had been a mistake.

        The Eisenhower of 1958 “consistently and clearly argued against embracing Nasser” (p.231).  He now viewed Nasser as a hardline opponent of any reconciliation between Arabs and Israel, squarely in the Soviet camp. Eisenhower, a “true realist with no ideological ax to grind,” came to recognize that his Suez policy of “sidelining the Israelis and the Europeans simply did not produce the promised results. The policy was . . . a blunder” (p.255).   Unfortunately, Doran argues, Eisenhower kept his views to himself until well into the 1960s and few historians picked up on his change of mind. This allowed those who sought to distance United States policy from Israel to cite Eisenhower’s stance in the 1956 Suez Crisis, without taking account of Eisenhower’s later reconsideration of that stance.

* * *

      Doran relies upon an extensive mining of diplomatic archival sources, especially those of the United States and Great Britain, to piece together this intricate depiction of the Eisenhower-Nasser relationship and the 1956 Suez Crisis. These sources allow Doran to emphasize the interactions of the key actors in the Middle East throughout the 1950s, including personal animosities and rivalries, and intra-governmental turf wars.  He writes in a straightforward, unembellished style. Helpful subheadings within each chapter make his detailed and sometimes dense narrative easier to follow. His work will appeal to anyone who has worked in an Embassy overseas, to Middle East and foreign policy wonks, and to general readers with an interest in the 1950s.

Thomas H. Peebles

Saint Augustin-de-Desmaures

Québec, Canada

June 19, 2017


Ineffective Peace Treaty


Dan Jones, Magna Carta:

The Birth of Liberty 

 

            The Magna Carta, a document dating from 1215 – a mere 802 years ago – is now regarded as the foundation for some of the most enduring Anglo-American liberties, among them trial by jury; the right of habeas corpus; the principle of no taxation without representation; and the notion that the king is subject to and not above the law.  Grandiose terms such as “due process of law” and the “rule of law” are regularly traced to the Great Charter. Yet, when we look at the charter from the perspective of 1215, we see a markedly different instrument: an ineffective peace treaty, designed to end civil war between a loathsome English king and his rebellious barons, that brought about almost no cessation of hostilities; and a compact that, within a few short weeks of its execution, was condemned in the strongest terms by the Pope, who threatened both sides with excommunication from the Catholic Church if they sought to observe or enforce its terms.

            Dan Jones’ Magna Carta: The Birth of Liberty seeks to capture the perspective and spirit of 1215. In this compact, easy-to-read volume, Jones, a British historian and journalist who has published extensively on the Middle Ages, takes his readers back to the late 12th and 13th centuries to show the origins and immediate aftereffects of the Great Charter. To this story, fascinating in itself, Jones adds much rich detail about life in England and on the European continent during the Middle Ages — for kings and barons, to be sure, but also for everyday folks, those without titles of nobility. In Jones’ interpretation, the Magna Carta was the product of a struggle for control of the 13th century English feudal order among three institutions: the crown, the nobility, and the Catholic Church.

           The key characters in Jones’ story are King John – “bad King John,” as I remember him described in school; approximately 200 barons, England’s most powerful nobles who, upon condition of pledging loyalty to the king, ruled over wide stretches of the realm like miniature kings; and Pope Innocent III, in Jones’ view one of the greatest medieval popes, a “reformer, a crusader, and a strict clerical authoritarian” (p.41) with an unbending belief in papal supremacy that was bound to clash with the expansive notions of royal prerogative which John entertained.  Yet, the two headstrong personalities enjoyed a brief period of collaboration that led directly to the Great Charter.

* * *

         Jones rejects recent attempts by historians to rehabilitate John’s reputation. “Bad King John” seems to summarize well who John was: a “cruel and unpleasant man, a second-rate soldier . . . slippery, faithless, interfering, [and] uninspiring . . . not a man who was considered fit for kingship” (p.28-29). Born in 1166, John was the youngest of five sons of the first of England’s Plantagenet kings, King Henry II, and Duchess Eleanor of Aquitaine. Of the five sons, only John and his brother Richard outlived their father. Richard, known as “Richard the Lionhearted” for his “peerless brilliance as a military leader” (p.24), succeeded his father as king in 1189.

        Neither Henry nor Richard spent much time in England. Both were busy fighting adversaries in France and acquiring lands in Brittany, Normandy, and Western France.  Richard was also involved both in the Third Crusade to the Holy Land and in wars elsewhere on the European continent. On his deathbed, Henry learned that his son John had joined some of his leading French adversaries in plotting against him. John repeated his treachery during his brother Richard’s reign: he provoked conflict with Richard’s royal administrators while his brother was away, attempting to seize control of government for himself. Richard, who had no children, died in 1199 of a wound suffered while fighting in France, and John inherited the English throne.

         John began his reign fighting wars on several fronts in France. Within five years of his accession, he had lost “virtually the whole Continental empire that had been so painstakingly assembled and defended by his father and his brother” (p.33) — not without reason was he known as “John Lackland.” But John “never gave up believing that he was obliged – perhaps even destined – to one day return to the lands he had lost and reclaim them” (p.38). As he devoted the better part of ten years to reclaiming lost French lands, John needed to raise huge revenues. Wars in those days, as in ours, were expensive undertakings.

         John was relentless in exploiting familiar sources of revenue and spotting new ones. He sold immunity from lawsuits and charged aristocratic widows vast sums to forego his right to subject such women to forced marriage. He expanded the lands deemed royal forests, and imposed substantial fines on those who sought to hunt or collect firewood on them. He levied punitive taxes on England’s Jews. None of these measures were wholesale innovations, Jones indicates. What made John different was the “sheer scale and relentlessness with which he bled his realm. Over the course of his reign his average annual income was . . . far higher than [what] either his father or his brother had ever achieved” (p.38).

       But John was most ruthless in imposing taxes and fees upon England’s 200 or so barons, who officially held their land at the pleasure of the king. Pledging loyalty to the king and paying taxes and fees to him permitted a baron to live, literally, like a king in a castle, surrounded by servants who worked in the castle, knights who pledged loyalty to the baron, and serfs who tilled nearby land. Beyond basic rent, the barons were subject to a wide range of additional payments to the king: inheritance taxes, fees for the king’s permission to marry, and payments to avoid sending a baron’s knights to fight in the royal army, known as “scutage,” one of the most contentious sources of friction between John and the barons. John “deliberately pushed numerous barons to the brink of bankruptcy, a state in which they became highly dependent on royal favor” (p.50).

            As tensions between king and barons mounted over John’s “pitilessly efficient legal and financial administration” (p.53), John also challenged the authority of the Catholic Church, the “ultimate guarantor” in 13th century England of the “spiritual health of the realm” (p.45).  In 1206, John found himself in direct confrontation with the church’s head in Rome, Pope Innocent III.  John objected vehemently to Innocent’s appointment of Stephen Langton as Archbishop of Canterbury, an instance of an on-going struggle over ecclesiastical appointments, in which kings claimed the right to appoint bishops in their kingdoms and popes resisted acknowledging any such right. Langton’s potentially seditious ideas alarmed John. The pope’s nominee condemned the “avarice . . . of modern kings” and criticized those who “collect treasure not in order that they may sustain necessity, but to satiate their cupidity” (p.40-41).

            To impede Langton’s appointment, John seized lands belonging to the Archbishop of Canterbury.  Innocent retaliated by placing an interdict upon England, forbidding most church services, a severe sentence on all of John’s subjects, placing in peril England’s “collective soul” (p.45). Later, the Pope excommunicated John — the ultimate 13th century sanction that a pope could impose upon an earthly being, foreclosing heaven’s everlasting grace and exposing the hapless soul to eternal damnation.  The stalemate ended in 1213 when John, facing the threat of an invasion from France, agreed to accept Langton as Archbishop, pledged obedience to the Pope and the Catholic Church, and vowed to lead a crusade to the Holy Land. Through this “astonishing volte-face,” John could henceforth claim “special protection from all his enemies as a personal vassal of the pope” (p.57).  For his part, Innocent had shown “remarkable moral flexibility,” blending seamlessly the “Christian principle of forgiving one’s enemies with a willingness to consort with almost anyone who he thought could help him achieve his heartfelt desire to smite the Muslims of the Middle East” (p.89).

          Having made his peace with Rome, John pursued his quest to retake previously lost lands in France.  He suffered a humiliating loss in 1214 at the town of Bouvines in northern France to forces aligned with French King Philip Augustus.  After this catastrophic debacle, and with his “foreign policy and military reputation now severely tarnished,” John returned to England to find the “chorus of baronial anger at his high-handed brand of kingship louder than ever” (p.63). A group of barons, but perhaps not a majority, formally renounced their fealty to John, thereby “declaring themselves free to make war upon him” (p.105). Having “unilaterally defied their lord and freed themselves from the feudal oath on which their relationship and the whole of the structure of society depended,” the barons were henceforth “outlaws, rebels, and enemies of the realm” (p.105).  John’s kingdom was “teetering dangerously on the brink of civil war. It was a war he could neither avoid nor afford to pursue” (p.63).

       In a mutinous spirit, the barons demanded that John confirm the Charter of Liberties, a proclamation issued by King Henry I more than a century earlier, in 1100, that had sought to bind the King to certain laws regarding the treatment of nobles, church officials, and individuals.   John’s response was to hold a council in London in January 1215 to discuss potential reforms with the barons. Both sides appealed for assistance to Pope Innocent III. John’s reconciliation with the pope two years earlier turned out to be a “political masterstroke” (p.58). The Pope squarely took John’s side in the dispute, providing him with a key bargaining edge.

           From late May into the early days of June 1215, messengers traveled back and forth between the king and the rebel barons. Slowly but surely they began to feel out the basis for an agreement, with Archbishop Langton playing a key role as mediator. By June 10, 1215, the outlines of an agreement had taken detailed form, and John was ready to meet his rebellious barons in person. The meeting took place at Runnymede, a meadow in Surrey on the River Thames, about 20 miles west of London, a traditional meeting point where opposing sides met to work out differences on neutral ground. General agreement was reached on June 15, 1215.  Four days later, the barons formally renewed their oaths of loyalty to John and official copies of the charter were issued.

* * *

          The bargain at Runnymede was essentially an exchange of peace to benefit the king, for which the barons gained confirmation of many long-desired liberties. The Runnymede charter was “much longer, more detailed, more comprehensive, and more sophisticated than any other statement of English law or custom that had ever been demanded from a King of England” (p.141). The written document was not initially termed “Magna Carta”; that would come two years later.  It consisted of 4,000 words, in continuous Latin text, without divisions. Subsequently the text was sub-divided into 63 clauses. “Read in sequence,” Jones writes, the 63 clauses “feel like a great jumble of issues and statements that at times barely follow one from the other.  Taken together, however, they form a critique of almost every aspect of Plantagenet kingship in general and the rule of John in particular” (p.133).

           Buried deep in the document were Clauses 39 and 40. Clause 39 declared: “No free man is to be arrested or imprisoned or disseized, or outlawed or exiled, or in any other way ruined, nor will we go or send against him, except by the legal judgment of his peers or by the law of the land” (p.138). Clause 40 stipulated: “to no one will we sell, to no one will we deny or delay right or justice” (p.138-39). More than any other portions of the charter, these two clauses constitute the reason why the Magna Carta remained consequential over the course of the following eight centuries. The two clauses enshrine, Jones states, the “basic idea that justice should always restrain the power of government” (p.139). They contain in embryo form the modern notions of due process of law and judgment by equals.

     But these clauses were far from priorities for either side. The charter’s first substantive clause affirmed that “the English Church shall be free” (p.134), a clause inserted at Archbishop Langton’s urging to limit the king from interfering in church appointments. Although the Magna Carta is “often thought to be a document concerned with the secular rights of subjects or citizens,” in 1215 its religious considerations were “given pride of place” (p.135).   Subsequent clauses restrained the king’s right to impose taxes upon the barons.

      The charter explicitly limited the authority of the Exchequer – the king’s treasury, the “most important institution of royal government” (p.14-15) – to impose inheritance taxes, so that it could no longer “extort, bully and ruin anyone whom the king happened merely to dislike” (p.136). Scutage, the tax exacted as an alternative to service in the king’s armies, was to be imposed only after taking the “common counsel of the realm” (p.136), foreshadowing the notion developed later in the 13th and 14th centuries that taxes could be imposed only after formal meetings between the king and his subjects.

        Clause 61, known as the “security clause,” was arguably of greatest importance to the barons. It established a panel of 25 specially elected barons empowered to hold John to his word. If John were to “transgress against any of the articles of peace” (p.140), the clause entitled the barons to renounce their loyalty to the king and take appropriate action, including taking the king’s castles, lands and possessions.  The security clause was the first mechanism in English history to allow the “community of the realm to override the king’s authority when that authority was abused” (p.140). More bluntly, if John were to backslide on his obligations under the charter, the clause explicitly “allowed for licensed civil war” (p.140).

        Other clauses in the charter regulated bridge building; banned fish traps; established uniform weights and measures for corn, cloth, and ale; and reversed the expansion of royal forests that had taken place during John’s reign. There was also, Jones writes, much in the Magna Carta that remained “vague, woolly, or fudged. In places the document feels like frustratingly unfinished business” (p.139). Yet, beneath the host of details and specificities of the charter, Jones sees two simple ideas. The first was that the English barons could conceive of themselves as a community of the realm – a group with “collective rights that pertained to them en masse rather than individually.” Even more fundamentally, although the king still made the law, he explicitly recognized in the charter that he had a duty to “obey [the law] as well” (p.141).

* * *

           The weeks that followed the breakup of the meeting at Runnymede were, as Jones puts it, “messy and marked by increasing distrust” (p.143). In the immediate aftermath of the charter’s confirmation, John was flooded with demands that he return land and castles he had confiscated in previous years. Prior to the end of June 1215, John was forced to make fifty such restorations to rebel barons. Seeing little advantage to the peace treaty he had agreed to, John convoked another meeting with the barons in July at Oxford. There, he sought a supplementary charter in which the barons would acknowledge that they were “bound by oath to defend him and his heirs ‘in life and limb’” (p.144). When the barons refused, John wrote to Pope Innocent III asking him to annul the Great Charter and release him from his oath to obey it.

           Writing back with the “righteous anger that he could summon better than any man in Europe”(p.144), Innocent more than complied with John’s request. In words that left little room for interpretation, Innocent declared the charter “null, and void of all validity forever” (p.145). Under threat of excommunication, Innocent enjoined John from observing the document and the barons from insisting upon its observance. By the end of September 1215, roughly 100 days after its execution, the Magna Carta was, Jones writes, “certifiably dead” (p.145).

* * *

          But the Great Charter did not remain dead.  Although civil war between John and the barons erupted anew in the autumn of 1215, the charter received new life with the deaths of the story’s two protagonists the following year: Innocent died in July 1216 and John that October.   After John’s death, the charter evolved from a peace treaty imposed by the king’s enemies to an “offering by the king’s friends, designed to demonstrate voluntarily the commitment of the new regime to govern by principles on which the whole realm could agree” (p.184-85). For the rest of the thirteenth century, the Magna Carta was “reconfirmed and reissued at moments of political instability or crisis” (p.184-85). Even where its specific clauses grew irrelevant and obsolete, “much importance was still attached to the idea of the Magna Carta as a bargaining chip, particularly in relation to taxation” (p.186-87).   By the end of the 13th century, a peace treaty that had survived only a few weeks in 1215 had become the “founding stone of the whole system of English law and government” (p.189-190).

       Into this story of political intrigue and civil conflict, Jones weaves detailed descriptions of everyday life in early 13th century England: for example, what Christmas and Easter celebrations entailed; the tenuous lives of serfs; and how life in London, already England’s largest city, differed from that in the rest of the realm.   These passages enliven Jones’ study of the charter’s origins and immediate afterlife.

           In a final chapter, Jones fast forwards several centuries, discussing briefly the Great Charter’s long afterlife: its influence on the rebellion against the Stuart kings in 17th century England, culminating in the Glorious Revolution of 1688; how the charter underlay the rebellion of England’s American colonies during the following century; and its continued resonance in modern times.  The charter’s afterlife, Jones writes, is the story of its myth and symbolism becoming “almost wholly divorced from its original history” (p.5).  Jones’ lucid and engrossing work constitutes an invaluable elaboration of the charter’s original history, reminding us of the unpromising early 13th century environment from which it emerged to become one of the most enduring documents of liberal democracy.

Thomas H. Peebles

La Châtaigneraie, France

May 23, 2017

 

 

 

 

 


Managing Winston


Sonia Purnell, Clementine:

The Life of Mrs. Winston Churchill 

            Biographies of political spouses run the risk of being overwhelmed by the politician once he or she enters the scene. Sonia Purnell’s Clementine: The Life of Mrs. Winston Churchill, by far the most comprehensive biography to date of Winston Churchill’s wife Clementine, does not quite succumb to that risk.  But Purnell, a freelance British journalist and historian, provides a fresh look at the familiar ups and downs in Winston’s career, recounting them from Clementine’s perspective, from the time the couple first met in 1904 and married in 1908 through Winston’s death in 1965.  Although comprehensive in its cradle-to-grave coverage of Clementine herself, the book shines in its treatment of the couple during World War II.  When Winston became Britain’s wartime Prime Minister in 1940, Clementine functioned as her husband’s closest advisor. She was, Purnell writes, Winston’s “ultimate authority, his conscience and the nearest he had to a direct line to the people.”  Without Clementine sharing his burden, “it is difficult if not impossible to imagine [Winston] becoming the single-minded giant who led Britain, against almost impossible odds, to victory over tyranny” (p.391).

            But if World War II was the couple’s own “finest hour,” to borrow from Winston’s famous speech to Parliament in June 1940, many of the qualities that enabled them to survive and thrive during that trial can be traced to the testing they received during World War I.  War, it seems, served as the force that bound their marriage together.  We know a great deal about the workings of that marriage because the couple spent an extraordinary amount of time apart from one another. They corresponded regularly when separated, and even communicated frequently in writing when they were together under the same roof. By one count, the couple sent back and forth about 1,700 letters, notes and telegrams, many of which survive, over the course of nearly six decades of courtship and marriage.

          The Churchills’ correspondence and the other portions of the record that Purnell has skillfully pieced together reveal a marriage that had its share of difficult moments, bending but never breaking. Both spouses had volatile and frequently volcanic personalities.  Although her husband was known for his bouts of depression, referred to informally as “Black Dog,” Clementine had an actual case of clinically diagnosed depression, and more than her fair share of mood swings and temperamental outbursts. Further, both spouses were surprisingly indifferent parents, more devoted to each other than to their children. Clementine, tormented that Winston might abandon her as her father had abandoned her mother, clearly placed Winston’s needs over those of her children. Yet, on more than one occasion she seems to have contemplated leaving the marriage.  Nonetheless, over the course of 57 years, the marital glue held.

* * *

         Clementine, born in 1885, had an unorthodox upbringing. Her mother, Lady Blanche Hozier, of aristocratic origin but limited means, was trapped in a bad marriage to Colonel Henry Hozier, who left his wife and children during Clementine’s early childhood. To this day, historians debate whether Hozier was indeed Clementine’s biological father, and the matter is unlikely ever to be settled conclusively. Clementine’s two sisters, Kitty and Nellie, may have been her half sisters – their paternity has not been conclusively established either. After Colonel Hozier’s departure, the three girls lived a peripatetic life with Lady Blanche, who took her children frequently to Northern France and allowed herself to be pursued by a wide circle of suitors. Kitty seemed to be her mother’s favorite among the three daughters, but she died a month before her 17th birthday and her mother “was never the same again” (p.21). Lady Blanche never provided Clementine with a steady, loving childhood, a loss which likely affected Clementine’s subsequent relationships with her own children.

         Clementine was first introduced to rising political star Winston Churchill at a society ball in the summer of 1904, when she was 18 and he was 29.  She was far from impressed with the “notorious publicity seeker” (p.29) who had recently defected from the Conservative Party to join the upstart Liberal Party over his opposition to a Conservative proposal to impose protective tariffs on goods imported into Britain.  Inexplicably, the usually gregarious and supremely self-confident young man clammed up, unable to make the requisite small talk. The next encounter occurred four years later, in 1908, when Clementine happened to be seated next to Winston at a dinner party. This time, Clementine “found his idealism and brilliance liberating” (p.31).  Winston was impressed that Clementine, herself more mature at age 22, knew “far more about life than the ladies of cosseting privilege he normally met, and she was well educated, sharing his love of France and its culture” (p.31). After a conventionally aristocratic, if short, courtship, the couple married later that year (the courtship, marriage and Winston’s early political years, from 1900 to 1915, are the subject matter of Michael Shelden’s Young Titan, reviewed here in May 2015).

            The marriage was “never destined to be smooth” (p.54), Purnell writes. The man Clementine married was “demanding, selfish and rash” (p.54), emotionally needy, lacking in empathy, and a workaholic with a tendency to bully.  But Clementine could be “rigid and unforgiving” (p.4) and brought an “explosive temper” to the marriage, where the “slightest setback, such as cold soup or a late delivery, could send her into a fury” (p.53). Plagued throughout life by a pattern of “severe listlessness alternating with near-hysterical outbursts” (p.148), Clementine, not Winston, had the couple’s only case of clinically diagnosed depression. Throughout their first three decades of marriage, the couple was united in the goal of making Winston Prime Minister. But they pursued this goal at no small cost to their offspring.

            Between 1909 and 1922, the couple had five children, four daughters and one son. Daughter Marigold, born in 1918, died at an early age. The four surviving offspring — Diana, b.1909; Randolph, b.1911; Sarah, b.1914; and Mary, b.1922 – “saw little of either parent, even by the standards of British upper-class families of the period” (p.184). Winston outwardly adored his children. He gave them silly nicknames and, when available, enjoyed playing games and roughhousing with them. But he was only infrequently available.  Clementine in this account seemed to lack even this level of intimacy. She was distant and not particularly warm with any of her children, and also frequently absent, either traveling with her husband or away on recurring travel and adventures on her own.

           Randolph, Diana and Sarah went on to lead turbulent adult lives. Randolph drank heavily, gambled frequently and acquired a reputation for boorish behavior.  One of the book’s most surprising – indeed stunning – episodes occurred during his 1939 marriage to Pamela Digby, later Pamela Harriman. It was not a good marriage. Randolph was abusive in many ways, physically and otherwise.  In the couple’s troubled marriage, Randolph’s parents plainly sided with their daughter-in-law over their son. After war broke out, with Randolph serving in the army and the couple living apart, Pamela pursued affairs with several leading figures from the United States, including famed journalist Edward R. Murrow and wealthy businessman Averell Harriman, whom she later married.

            In Purnell’s account, both Winston, by then Britain’s wartime Prime Minister, and Clementine encouraged these romantic liaisons for their intelligence gathering potential in furtherance of the war effort. Pamela “fast became one of the most important intelligence brokers in the war” (p.275).   She provided information to her parents-in-law on “what the Americans were thinking” (p.274) and boosted Britain’s case for more American assistance.  Randolph never forgave his parents for condoning the liaisons, and it is not difficult to understand why. Randolph died of a heart attack in 1968, at age 57.

            Randolph’s sisters Diana and Sarah also struggled through adult life.  Diana had two bad marriages and suffered repeatedly from nervous breakdowns.  She likely took her own life from an overdose of barbiturates in 1963, at age 54.  Sarah had a moderately successful acting career, but was plagued throughout much of her adult life by alcohol abuse, “drinking herself to her grave by slow stages” (p.387). She married three times. Her termination of an affair with American Ambassador John Winant likely contributed to his suicide in 1947. With Sarah on the brink of filing for divorce from her second husband, he too committed suicide. Sarah died in 1982, five years after Clementine, at age 68.

            Only the youngest Churchill, Mary, “always the perfect daughter” (p.387), achieved something akin to normalcy as an adult.  She married but once, had five children, served in numerous public organizations, and wrote the first (and, prior to Purnell’s, seemingly the only) biography of her mother.  In the 1960s, she was quoted as saying that, based on her own childhood experience, she “made a conscious decision to put my children first because I did feel something had been. . . yes, missing at home” (p.359).  Alone among the Churchill children, Mary lived to an old age, dying in 2014 at age 91.

            Purnell documents several points between the two wars, and after World War II, when Clementine appeared to be on the brink of exiting the marriage.  Bitter rows between the parents over Randolph’s behavior as a young adult led in the 1930s to hints that the Churchills’ “ever more regular separations might become permanent” (p.196). After the war, perfect daughter Mary sought to mediate the couple’s differences.  Worried that her parents’ marriage again seemed on the verge of falling apart, Mary acknowledged her mother’s “occasional yearning for ‘the quieter more banal happiness of being married to an ordinary man’” (p.354).

          Another sign of the marriage’s sometimes fragile character came in the 1930s, when Clementine, traveling without her husband on a four-month cruise of the East Indies, fell under the charms of Terence Philip, an art dealer with a reputation for “passing flirtations” (p.203).  Philip was “tall, rich, suave, an authority on art and unburdened by driving ambition – unlike Winston, in fact, in almost every respect” (p.201). It is unclear whether Clementine’s relationship with Philip was adulterous. Philip was “thought not to be that interested in women sexually. . . Nevertheless his open and ardent admiration shook Clementine to her core” (p.203-04). Purnell also describes an incident where Winston was invited to take tea with his cousin’s fiancée, only to learn upon arrival at her apartment that the barely clad woman had a purpose other than tea in mind for his visit.  Upon discovering that purpose, Winston “insisted he had left immediately” and recounted the incident to Clementine, who “appears to have been surprisingly relaxed about the encounter” (p.132).

* * *

            Purnell neatly weaves these soap opera details of the Churchill family into the familiar story of Winston pursuing his political ambitions and the less familiar story of Clementine playing an indispensable role in that pursuit. Shortly after the couple’s marriage, Winston became Home Secretary, charged with keeping internal order in the country.  In 1911, he was appointed First Lord of the Admiralty, head of Britain’s Royal Navy, and held this position when Britain found itself at war in 1914.  In this capacity, he oversaw the failed 1915 attack on Ottoman Turkey at the Dardanelles straits, a calamitous failure for which Winston became the scapegoat, “held liable for one of the bloodiest British military failures in history” (p.81). Purnell suggests that Winston’s marriage saved him from self-destruction at the time of this grim setback. Only Clementine “could repeatedly tell him why he was deemed untrustworthy and why he had made so many enemies” (p.118).

             With Clementine’s support, Winston slowly crept back into politics. He lost his seat as a Liberal Member of Parliament in 1922. At a time when the Liberal Party was fading into irrelevance, he rejoined the Conservative Party in 1924, becoming Chancellor of the Exchequer. In that capacity, he oversaw Britain’s return in 1925 to the gold standard, another decision that proved disastrous for him politically, resulting in deflation and unemployment and leading to the General Strike of 1926. With the defeat of the Conservative government in 1929, Winston was out of politics and entered what he later termed his “Wilderness Years.” In the 1920s, he had earned a reputation as somewhat of a crank, railing incessantly about the Bolshevik menace to Europe.  In the 1930s, he shifted his rhetorical target to Germany and the threat that Adolf Hitler’s Nazi party posed, which the public perceived initially as little more than another example of his crankiness. But in May 1940, Winston became his country’s Prime Minister, charged with leading the war against Nazi Germany which had broken out the previous September.  Winston and Clementine’s “true life’s work” then began, and she “would barely leave his side again until it was done” (p.234).

            By the time Winston became Prime Minister, Clementine was already an “amalgam of special advisor, lobbyist and spin doctor” — or, as David Lloyd George put it, an “expert at ‘managing’ Winston” (p.94). At each juncture in Winston’s career, Clementine developed an “astute judgment of the characters involved, the goals that were achievable and the dangers to be anticipated” (p.57). She closely reviewed drafts of Winston’s speeches and coached him on effective delivery techniques.   Campaigning for his seat in Parliament bored Winston, and he frequently sent Clementine to rouse his constituents as elections approached.   In a time before political optics and images were given over to full-time professionals, Clementine was Winston’s optics specialist. With her “surer grasp of the importance of public image” (p.3), she frequently raised questions that the more impulsive Winston hadn’t fully thought through about how a course of action would look to the voters or be perceived internationally.

             During World War II, Clementine assumed an unprecedented role as Winston’s aide.  It is unlikely, Purnell contends, that “any other prime ministerial spouse in British history has been so involved in government business, or wielded such personal power – albeit entirely behind the scenes.  She did not duplicate what Winston was doing, or cross it; she complemented it and he gave her free rein to do so” (p.246-47).  When Winston was in Teheran in December 1943 meeting with Roosevelt and Stalin, for instance, Clementine was busy putting out fires and easing tensions within Winston’s cabinet.  At the same time, she “reviewed reports on parliamentary debates, read the most secret telegrams, kept [Opposition leader and Deputy Prime Minister] Clement Attlee informed of the prime minister’s progress, dealt with constituency matters, and sent back to Winston digests of public reaction to the war” (p.314).

           Yet, paradoxically, Winston and Clementine did not see eye-to-eye on many of the issues of their time, with Clementine’s instincts conspicuously more liberal than those of her husband.  Despite her aristocratic background and lofty position as a politician’s wife, Clementine was unusually adept at establishing links and relations with average citizens. Her relatively impoverished childhood and limited work experience while unmarried “fostered in Clementine an instinctive sympathy for the worker’s point of view” (p.103).  Even before World War I, she was a fervent advocate of women’s voting rights, “just the first of many issues on which she would part ways with her husband’s more conservative political views” (p.56). Later she would champion co-education at Cambridge University’s Churchill College and abolition of the death penalty.

          During World War II, Clementine frequently visited injured military personnel and otherwise sought out everyday citizens to encourage them to continue to support the war effort.  She also prevailed upon her husband to create opportunities for women to serve in auxiliary military roles. Winston was “initially unenthusiastic at the idea . . . but Clementine persevered and he became one of the first to appreciate that the country could not win through the sacrifice of its menfolk alone” (p.241).

          A tale within the tale of World War II is Clementine’s relationship with American First Lady Eleanor Roosevelt. The two met on several occasions during the war. Clementine did not care for Eleanor’s husband Franklin, who had taken the unpardonable liberty of calling her “Clemmie,” a “privilege normally reserved for the most deserving and long-serving friends” (p.310); and there was no love lost between Winston and Eleanor.  Eleanor felt Winston “romanticized war” (p.281), while Winston found Eleanor to be a busybody “who did not conform to [his] ideas of an ‘attractive’ woman” (p.285).  Nonetheless, the two women “enjoyed each other’s company” (p.296).  They were of a similar age and from similar upper-class backgrounds, and each had endured a difficult childhood.  Both demonstrated uncommon concern for the poor and their countries’ least favored citizens.  Each lost a child as a young mother, and had children who struggled through adult life.  Purnell notes that the four Roosevelt sons racked up 18 marriages between them, while Clementine’s four children blundered through a mere eight.

          But the Roosevelts were living almost entirely separate lives during World War II, with Eleanor reduced to the role of a second-tier political advisor, in the dark on most of the key war issues that her husband was dealing with.  She sometimes criticized or questioned her husband’s decisions or policies in a newspaper column she wrote. Such public airing of differences between Clementine and Winston was unthinkable for either spouse.  As Purnell notes, Clementine “never even hinted publicly about her private disagreements with Winston. But then [unlike Franklin Roosevelt] he kept nothing from her” (p.306).

           Roosevelt died in April 1945, less than a month prior to the end of Europe’s most devastating war.  A few short months later, Winston, himself in poor health, saw his Conservative party voted out of office, as Clement Attlee and his Labour Party won a general election in July 1945.  Improbably, Winston returned at age 77 as Prime Minister to lead the Conservatives from 1951 to 1955, his final and generally unsatisfactory years as government leader.  He remained a Member of Parliament until the October 1964 general election, and died just months later in January 1965.

* * *

         Purnell ends her substantive chapters with Winston’s death, reserving for an “Epilogue” Clementine’s final years as a widow, up to her death in 1977 at age 92. This was a period of “almost ethereal calm” (p.387) for her.  With Randolph’s death in 1968, she had outlived three of her five children. Her husband’s towering reputation across the globe was secure and, as Purnell puts it, “if her light was fading, so be it” (p.388).  Purnell’s thoroughly researched and highly readable work constitutes a major step in assuring that Clementine’s light continues to shine.

Thomas H. Peebles

La Châtaigneraie, France

May 4, 2017

 


Living Philosophy

 

 

Sarah Bakewell, At the Existentialist Café:

Freedom, Being, and Apricot Cocktails 

            Sarah Bakewell’s At the Existentialist Café: Freedom, Being, and Apricot Cocktails takes a deep but refreshingly casual look at the philosophical way of thinking termed existentialism, giving the term an historical treatment grounded in the actual lives of existentialist philosophers.  Part philosophy, part history, part biography, her work is also partly autobiographical.  Bakewell, a British writer and teacher who is the author of a highly acclaimed book on Montaigne, endearingly details her own journey in learning about existentialism and explains how major existential writings influenced her personally.  Philosophy, she contends, “becomes more interesting when it is cast into the form of a life.” Likewise, “personal experience is more interesting when thought about philosophically” (p.32).  Quite so.

More than just about any other form of philosophy, existentialism cannot really be understood without digging into the day-to-day lives of existentialist philosophers themselves. The existentialist, Bakewell emphasizes, seeks to capture the “quality of experience as we live it rather than according to the frameworks suggested by traditional philosophy, psychology, Marxism, Hegelianism, structuralism, or any of the other –isms and disciplines that explain our lives away” (p.325). Bakewell acknowledges that existentialism is difficult to define with any precision; there is no consensus definition. For some, it is “more of a mood than a philosophy” (p.1). Her own definition is itself a page long, and she invites her readers to skip over it.

At the risk of oversimplifying a complex and elusive term, existentialism in Bakewell’s interpretation might best be thought of as a way of thinking about existence for human beings. It focuses upon how humans live the moments large and small in the time allotted to them, i.e., how they exist. Humans are unique beings in that they are free to choose how they live and are responsible for their choices, but only within what Bakewell describes as a “situation,” which includes a person’s own biology and psychology as well as the “physical, historical and social variables” of each life.  The existentialist therefore sees human existence, Bakewell emphasizes, as “ambiguous: at once boxed in by borders and yet transcendent and exhilarating” (p.34).

Bakewell’s hardcopy cover features sketches of four individuals: Jean-Paul Sartre and Simone de Beauvoir at the center, flanked by Albert Camus on their left and Maurice Merleau-Ponty on their right. Sartre and Beauvoir are not only at the center of the cover: they are also the center of Bakewell’s story, occupying the main table at her Existentialist Café, a “big, busy café of the mind” (p.33). Existentialism is above all the story of Sartre and Beauvoir, philosophy’s ultimate power couple, defined by their writings and their lives. Because Sartre and Beauvoir famously lived those lives in Paris, the story’s main setting is France and the Parisian intellectual milieu from the late 1920s until Sartre’s death in 1980 and Beauvoir’s six years later (almost to the day), in 1986.

The Existentialist Café is thus a Parisian café, probably located somewhere on the Boulevard St. Germain in Paris’ 6th arrondissement, much like the actual cafés where Sartre and Beauvoir wrote, drank, met friends and acquaintances, and thrashed out their existential ideas over the course of a half-century. Sartre and Beauvoir became a couple in 1929, when they were 23 and 21 respectively. From the beginning, their relationship was explicitly open-ended, allowing both partners to pursue amorous digressions. But their relationship was also what Bakewell terms a “philosophical demonstration of existentialism in practice, defined by the two principles of freedom and companionship” (p.120). Although the bourgeois ideal of marriage held no appeal for either, their “shared memories, observations and jokes bound them together just as in any long marriage” (p.120).

Camus and Merleau-Ponty, not quite existentialists in the sense that Bakewell uses the term, were Sartre and Beauvoir’s contemporaries who drank frequently with them and thought, wrote and argued – often vehemently — about many of the ideas that animated Sartre and Beauvoir.  Merleau-Ponty, far less well known than Camus, Sartre and Beauvoir, merits a full chapter in Bakewell’s work, part of her effort to introduce him to English language readers. Camus and Merleau-Ponty both had fallings out with Sartre and Beauvoir, partially over Cold War political differences and partially because Sartre’s outsized personality led him eventually to break with just about everyone he befriended, save Beauvoir. Camus and Merleau-Ponty’s fluctuating relationships with Sartre and Beauvoir constitute one of the book’s two main threads.

The other is the influence exerted upon the couple  by Edmund Husserl and Martin Heidegger, Germans of an older generation associated with an approach to philosophy termed phenomenology, existentialism’s direct antecedent. Heidegger, infamous for embracing Nazism in the 1930s and remaining steadfastly unrepentant thereafter, is a brooding, almost villainous presence throughout Bakewell’s study — a scary guy when he drops in at the Existentialist Café, unlikely to be telling many jokes. Some of the 20th century’s foremost thinkers, writers and intellectuals also make short appearances at Bakewell’s café, including Karl Jaspers, Hannah Arendt, Arthur Koestler, Richard Wright and James Baldwin.

* * *

            Existentialism may be a difficult term to define, but its origins are easy to pinpoint in Bakewell’s account: a conversation during the 1932-33 Christmas holiday season, involving Sartre, Beauvoir and Raymond Aron, Sartre’s classmate at France’s renowned École Normale Supérieure.  The conversation took place at Paris’ Bec-de-Gaz café on Boulevard Montparnasse, about a mile from the Boulevard St. Germain cafés Beauvoir and Sartre later made famous.  Sartre, 27, and Beauvoir, 25, were then teaching high school in separate locations in Normandy and were back home in Paris enjoying the holiday break. Aron had just returned from studying philosophy in Berlin, a city then on edge, with Adolf Hitler’s unruly National Socialist party enjoying a surge in representation in Weimar Germany’s Parliament. The three twenty-somethings exchanged banter and the latest gossip as they drank apricot cocktails, the Bec-de-Gaz’s specialty.

Aron recounted to his friends his discovery in Berlin of phenomenology, then considered a new approach to philosophy.  He explained how eminent philosophers Husserl and Heidegger were turning away from the often-contorted abstractions of traditional philosophy to concentrate on things as they are – being was the key word. Husserl and Heidegger were asking questions such as: what is it for a thing to be? What does it mean to say you are? Looking at the apricot cocktails on the table, Aron told his friends, “If you are a phenomenologist, you can talk about this cocktail and make philosophy out of it!” (p.3). Although Sartre and Beauvoir were familiar with the works of Husserl and Heidegger, in Bakewell’s account this moment at the Montparnasse café was an epiphany for both, the moment when the approach to philosophy that we now call existentialism came into being.  Together, over the course of nearly a half-century, Sartre and Beauvoir went on to transform some of the basic ideas of phenomenology into their own distinct way of thinking.

Sartre subsequently studied  in Germany under Husserl. But the roots of existentialism in Bakewell’s interpretation may be found even further back than Heidegger and Husserl, in the work of 19th century philosophers Friedrich Nietzsche and Søren Kierkegaard. The “heralds of modern existentialism,” Nietzsche and Kierkegaard “pioneered a mood of rebellion and dissatisfaction, created a new definition of existence as choice, action and self-assertion, and made a study of the anguish and difficulty of life. They also worked in the conviction that philosophy was not just a profession. It was life itself – the life of an individual” (p.20).

20th century phenomenology built upon and systematized Nietzsche and Kierkegaard’s iconoclastic way of thinking. It sought, as Bakewell puts it, to give a “formal mode of access to human experience,” allowing philosophers to “talk about life more or less as non-philosophers do, while still being able to tell themselves they are being methodological and rigorous” (p.43). This mode of access to human experience flourished amidst the turmoil of post-World War I Germany under Husserl, considered to be the “father” of phenomenology, and Heidegger, Husserl’s student and subsequently his colleague at the University of Freiburg.  For Husserl, phenomenology meant “stripping away distractions, habits, clichés of thought, presumptions and received ideas, in order to return to what he called the ‘things themselves’” (p.40). As Hitler’s virulent form of xenophobic nationalism took hold in Germany, Husserl, born into a Jewish family, sought to retain the Enlightenment spirit of shared reason and free inquiry (p.132). He died in 1938 at age 79.

Heidegger took phenomenology in a different direction in the 1930s. His appeal to students was that he sought nothing less than to “overturn human thinking, destroy the history of metaphysics, and start philosophy all over again” (p.62). His writings revealed a yearning to go back “into the deep forest, into childhood innocence and into the dark waters from which the first swirling chords of thought had stirred. Back . . . to a time when societies were simple, profound and poetic” (p.131). Heidegger urged his students to exercise   “vigilance,” to transcend the human tendency to become stuck in habits, received ideas, and a narrow-minded attachment to possessions.

But vigilance for Heidegger in Hitler’s Germany “did not mean calling attention to Nazi violence, to the intrusion of state surveillance, or to the physical threats to his fellow humans. It meant being decisive and resolute in carrying through the demands history was making upon Germany, with its distinctive Being and destiny. It meant getting in step with the chosen hero” (p.87). Heidegger “set himself against the philosophy of humanism, and he himself was rarely humane in his behavior” (p.320), Bakewell contends. She notes an instance where Heidegger went out of his way late in life to welcome the Jewish poet and concentration camp survivor Paul Celan to Freiburg. Bakewell terms this the “single documented example” she found in her research of Heidegger “actually doing something nice” (p.304-05).

Sartre was hardly more likeable — “monstrous . . . self-indulgent, demanding [and] bad tempered” (p.321-22). But behind these less commendable qualities, Bakewell finds an endearing man with powerful ideas bursting out “on all sides with energy, peculiarity, generosity and communicativeness” (p.322). Unlike Heidegger, Sartre “moved ever forwards, always working out new (often bizarre) responses to things, or finding ways of reconciling old ideas with fresh input. . .  He was always thinking ‘against himself,’” and he “followed Husserl’s phenomenological command by exploring whatever topic seemed most difficult at each moment” (p.322). Freedom became the great subject of Sartre’s philosophy during the Nazi occupation, central to almost everything he wrote from that point onward.

The connection between description and freedom  fascinated Sartre. “A writer is a person who describes, and thus a person who is free – for a person who can exactly describe what he or she experiences can also exert some control over those events. Sartre explored this link between writing and freedom again and again in his work” (p.104).   Bakewell is impressed by Sartre’s radical atheism, so different from that professed by Heidegger, who “abandoned his faith only in order to pursue a more intense form of mysticism.  Sartre was a profound atheist, and a humanist to his bones. He outdid even Nietzsche in his ability to live courageously and thoughtfully in the conviction that nothing lies beyond, and that no divine compensations will ever make up for anything on this earth.” For Sartre, Bakewell writes with emphasis, “this life is what we have, and we must make of it what we can” (p.323).

Beauvoir in Bakewell’s view was a better fiction writer than Sartre, exploring in her writings how the forces of constraint and freedom play themselves out in everyday lives. One of the 20th century’s “greatest intellectual chroniclers” (p.326), with a “genius for being amazed by the world” (p.109), Beauvoir is best known today for her landmark 1949 feminist tract, The Second Sex, a work “revolutionary in every sense” (p.208) which addressed the “complex territory where free choice, biology and social and cultural factors meet and mingle” (p.226).

How to be a woman was for Beauvoir the “existentialist problem par excellence” (p.215).  Bakewell terms The Second Sex a “confident experiment in what we might call ‘applied existentialism,’” in which Beauvoir “used philosophy to tackle two huge subjects: the history of humanity – which she reinterpreted as a history of patriarchy – and the history of an individual woman’s whole life as it plays itself out from birth to old age” (p.208). The Second Sex in Bakewell’s view is the “single most influential work ever to come out of the existentialist movement” (p.210).

Left-wing politics were a huge part of the existentialist agenda for both Sartre and Beauvoir, with Sartre the more overtly political.  Sartre was never a Communist party member, and his relationship to communism is not the mirror image of Heidegger and Nazism. But Sartre adopted some outlandish left-wing ideas.  He embraced anti-colonialist Frantz Fanon’s rejection of Gandhi’s notion of non-violent change, considering violence essential to political progress.  His embrace, Bakewell writes, was so enthusiastic that he “outdid the original, shifting the emphasis so as to prize violence for its own sake. Sartre seemed to see the violence of the oppressed as a Nietzschean act of self-creation. Like Fanon, he also contrasted it with the hidden brutality of colonialism” (p.274).

Sartre was the direct target of Raymond Aron’s classic 1955 work, The Opium of the Intellectuals, in which his École Normale classmate accused Sartre of being “merciless towards the failings of the democracies but ready to tolerate the worse crimes as long as they are committed in the name of proper doctrines” (p.266).  Sartre was troubled by the 1956 Soviet invasion of Hungary, but it was not until the Soviet invasion of Czechoslovakia in 1968 during the “Prague Spring” that he definitively rejected the Soviet model, “only to praise people like Mao Tse-tung and Pol Pot instead” (p.293).

Cold War differences also upended Sartre and Beauvoir’s friendship with their contemporaries and formerly close companions Merleau-Ponty and Camus. Bakewell describes Merleau-Ponty as the “happy philosopher of things as they are” (p.326), the sole thinker at Bakewell’s Existentialist Café who seemed to have had a happy childhood. Beauvoir once considered Merleau-Ponty, born months before her in 1908, potential boyfriend material before concluding that his sunny bourgeois outlook was a poor fit with her more combative disposition. On the cover, Merleau-Ponty is the only one of the three men dressed in a suit and tie, and he seems in this account a little out of place at the Existentialist Café — the fellow who joins the gang for a few drinks after a day’s work, then catches the train back to a suburban home to spend the rest of the evening with the wife and kids.

But if the non-Bohemian Merleau-Ponty was out of place at the Existentialist Café, Bakewell considers him the “intellectual hero” of her story for providing the fullest description of “how we live from moment to moment, and thus of what we are” (p.325). Merleau-Ponty brought the insights of psychology and cognitive science to the study of philosophy, and in particular elevated child psychology as an essential component of philosophy, an “extraordinary insight.”  Apart from Rousseau, Bakewell notes, few philosophers before Merleau-Ponty had taken childhood seriously.  Most “wrote as though all human experience were that of a fully conscious, rational, verbal adult who has been dropped into this world from the sky – perhaps by a stork” (p.231).  Strongly sympathetic to Communism in the 1940s, Merleau-Ponty became disaffected with its ideological rigidity in the 1950s, at the time of the Korean War. He laid out his case against Communism in a 1955 book, Adventures of the Dialectic, which included a chapter entitled “Sartre and Ultrabolshevism” that criticized Sartre’s political writings for their inconsistencies and lack of practicality. The work prompted a rift between the two men that healed only upon Merleau-Ponty’s death in 1961 from a heart attack at age 53, when Sartre wrote a glowing obituary about his one-time friend.

Camus is the “new kid on the block” at Bakewell’s Existentialist Café, a brash outsider from Algeria unwilling to be intimidated by Sartre (although quite willing to be charmed by Beauvoir). Camus’ vision was embodied in his 1942 piece, The Myth of Sisyphus, where he argued, as Bakewell puts it, that we must “decide whether to give up or keep going. If we keep going, it must be on the basis of accepting that there is no ultimate meaning to what we do” (p.150). Sartre and Beauvoir rejected Camus’ vision. For them, Bakewell emphasizes, “life is not absurd . . . Life for them is full of real meaning, although that meaning emerges differently for each of us” (p.151).  Camus’ 1951 essay The Rebel laid out a theory of rebellion and political activism that Sartre viewed as an attack upon Soviet Communism and its fellow travelers, notably himself. Dismissing The Rebel as an apology for capitalism, Sartre never forgave Camus for “playing into the hands of the right at a delicate historical moment” (p.257). But when Camus died tragically in an automobile accident in 1960 at age 43, Sartre wrote a glowing obituary, as he did the following year for Merleau-Ponty.

* * *

            Throughout much of history, Bakewell notes, philosophy has been primarily the purview of scholars who “prided themselves on their discipline’s exquisite uselessness” (p.17). Bakewell demonstrates how Sartre, Beauvoir and the other thinkers at her Existentialist Café broke that mold, shaping what she terms “philosophy as a way of life” (p.17).  She further demonstrates how a skillful writer can bring philosophy as a way of life to life through a narrative exquisitely engaging for general readers and specialists alike.

     Thomas H. Peebles

La Châtaigneraie, France

April 20, 2017

 

 

 


Portrait of a President Living on Borrowed Time

Joseph Lelyveld, His Final Battle:

The Last Months of Franklin Roosevelt 

            During the last year and a half of his life, from mid-October 1943 to his death in Warm Springs, Georgia on April 12, 1945, Franklin D. Roosevelt’s presidential plate was full, even overflowing. He was grappling with winning history’s most devastating  war and structuring a lasting peace for the post-war global order, all the while tending to multiple domestic political demands. But Roosevelt spent much of this time out of public view in semi-convalescence, often in locations outside Washington, with limited contact with the outside world. Those who met the president, however, noticed a striking weight loss and described him with words like “listless,” “weary,” and “easily distracted.” We now know that Roosevelt had life-threatening high blood pressure, termed malignant hypertension, making him susceptible to a stroke or coronary attack at any moment. Roosevelt’s declining health was carefully shielded from the public and only rarely discussed directly, even within his inner circle. At the time, probably not more than a handful of doctors were aware of the full gravity of Roosevelt’s physical condition, and it is an open question whether Roosevelt himself was aware.

In His Final Battle: The Last Months of Franklin Roosevelt, Joseph Lelyveld, former executive editor of the New York Times, seeks to shed light upon, if not answer, this open question. Lelyveld suggests that the president likely was more aware than he let on of the implications of his declining physical condition. In a resourceful portrait of America’s longest serving president during his final year and a half, Lelyveld considers Roosevelt’s political activities against the backdrop of his health. The story is bookended by Roosevelt’s meetings to negotiate the post-war order with fellow wartime leaders Winston Churchill and Joseph Stalin, in Teheran in December 1943 and at Yalta in the Crimea in February 1945. Between the two meetings came Roosevelt’s 1944 decision to run for an unprecedented fourth term, a decision he reached just weeks prior to the Democratic National Convention that summer, and the ensuing campaign.

Lelyveld’s portrait of a president living on borrowed time emerges from an excruciatingly thin written record of Roosevelt’s medical condition. Roosevelt’s medical file disappeared without explanation from a safe at Bethesda Naval Hospital shortly after his death.  Unable to consult Roosevelt’s actual medical records, Lelyveld draws clues concerning his physical condition from the diary of Margaret “Daisy” Suckley, discovered after Suckley’s death in 1991 at age 100, and made public in 1995. The slim written record on Roosevelt’s medical condition limits Lelyveld’s ability to tease out conclusions on the extent to which that condition may have undermined his job performance in his final months.

* * *

            Daisy Suckley, a distant cousin of Roosevelt, was a constant presence in the president’s life in his final years and a keen observer of his physical condition. During Roosevelt’s last months, the “worshipful” (p.3) and “singularly undemanding” Suckley had become what Lelyveld terms the “Boswell of [Roosevelt’s] rambling ruminations,” secretly recording in an “uncritical, disjointed way the hopes and daydreams” that occupied the frequently inscrutable president (p.75). By 1944, Lelyveld notes, there was “scarcely a page in Daisy’s diary without some allusion to how the president looks or feels” (p.77).  Lelyveld relies heavily upon the Suckley diary out of necessity, given the disappearance of Roosevelt’s actual medical records after his death.

Lelyveld attributes the disappearance to Admiral Ross McIntire, an ears-nose-and-throat specialist who served both as Roosevelt’s personal physician and Surgeon General of the Navy. In the latter capacity, McIntire oversaw a wartime staff of 175,000 doctors, nurses and orderlies at 330 hospitals and medical stations around the world. Earlier in his career, Roosevelt’s press secretary had upbraided McIntire for allowing the president to be photographed in his wheelchair. From that point forward, McIntire understood that a major component of his job was to conceal Roosevelt’s physical infirmities and protect and promote a vigorously healthy public image of the president. The “resolutely upbeat” (p.212) McIntire, a master of “soothing, well-practiced bromides” (p.226), thus assumes a role in Lelyveld’s account which seems as much “spin doctor” as actual doctor. His most frequent message for the public was that the president was in “robust health” (p.22), in the process of “getting over” a wide range of lesser ailments such as a heavy cold, flu, or bronchitis.

A key turning point in Lelyveld’s story occurred in mid-March 1944, 13 months prior to Roosevelt’s death, when the president’s daughter Anna Roosevelt Boettiger confronted McIntire and demanded to know more about what was wrong with her father. McIntire doled out his “standard bromides, but this time they didn’t go down” (p.23). Anna later said that she “didn’t think McIntire was an internist who really knew what he was talking about” (p.93). In response, however, McIntire brought in Dr. Howard Bruenn, the Navy’s top cardiologist. Evidently, Lelyveld writes, McIntire had “known all along where the problem was to be found” (p.23). Bruenn was apparently the first cardiologist to have examined Roosevelt.

McIntire promised to have Roosevelt’s medical records delivered to Bruenn prior to his initial examination of the president, but failed to do so, an “extraordinary lapse” (p.98) which Lelyveld regards as additional evidence that McIntire was responsible for the disappearance of those records after Roosevelt’s death the following year. Bruenn found that Roosevelt was suffering from “acute congestive heart failure” (p.98). He recommended that the wartime president avoid “irritation,” severely cut back his work hours, rest more, and reduce his smoking habit, then a daily pack and a half of Camel cigarettes. In the midst of the country’s struggle to defeat Nazi Germany and imperial Japan, its leader was told that he “needed to sleep half his time and reduce his workload to that of a bank teller” (p.99), Lelyveld wryly notes.  Dr. Bruenn saw the president regularly from that point onward, traveling with him to Yalta in February 1945 and to Warm Springs in April of that year.

Ten days after Dr. Bruenn’s diagnosis, Roosevelt told a newspaper columnist, “I don’t work so hard any more. I’ve got this thing simplified . . . I imagine I don’t work as many hours a week as you do” (p.103). The president, Lelyveld concludes, “seems to have processed the admonition of the physicians – however it was delivered, bluntly or softly – and to be well on the way to convincing himself that if he could survive in his office by limiting his daily expenditure of energy, it was his duty to do so” (p.103).

At that time, Roosevelt had not indicated publicly whether he wished to seek a fourth presidential term and had not discussed this question with any of his advisors. Moreover, with the “most destructive military struggle in history approaching its climax, there was no one in the White House, or his party, or the whole of political Washington, who dared stand before him in the early months of 1944 and ask face-to-face for a clear answer to the question of whether he could contemplate stepping down” (p.3). The hard if unspoken political truth was that Roosevelt was the Democratic party’s only hope to retain the White House. There was no viable successor in the party’s ranks. But his re-election was far from assured, and public airing of concerns about his health would be unhelpful, to say the least, in his re-election bid. Roosevelt did not make his actual decision to run until just weeks before the 1944 Democratic National Convention in Chicago.

At the convention, Roosevelt’s then vice-president, Henry Wallace, and his counselors Harry Hopkins and Jimmy Byrnes jockeyed for the vice-presidential nomination, along with William Douglas, already a Supreme Court justice at age 45. There’s no indication that Senator Harry S. Truman actively sought to be Roosevelt’s running mate. Lelyveld writes that it is a tribute to FDR’s “wiliness” that the notion has persisted over the years that he was “only fleetingly engaged in the selection” of his 1944 vice-president and that he was “simply oblivious when it came to the larger question of succession” (p.172). To the contrary, although he may not have used the word “succession” in connection with his vice-presidential choice, Roosevelt “cared enough about qualifications for the presidency to eliminate Wallace as a possibility and keep Byrnes’s hopes alive to the last moment, when, for the sake of party unity, he returned to Harry Truman as the safe choice” (p.172-73).

Having settled upon Truman as his running mate, Roosevelt indicated that he did not want to campaign as usual because the war was too important. But campaign he did, and Lelyveld shows how hard he campaigned – and how hard it was for him given his deteriorating health, which aggravated his mobility problems. The outcome was in doubt up until Election Day, but Roosevelt was resoundingly reelected to a fourth presidential term. The president could then turn his full attention to the war effort, focusing both upon how the war would be won and how the peace would be structured. Roosevelt’s foremost priority was structuring the peace; the details on winning the war were largely left to his staff and to the military commanders in the field.

Roosevelt badly wanted to avoid the mistakes that Woodrow Wilson had made after World War I. He was putting together the pieces of an organization already referred to as the United Nations and fervently sought  the participation and support of his war ally, the Soviet Union. He also wanted Soviet support for the war against Japan in the Pacific after the Nazi surrender, and for an independent and democratic Poland. In pursuit of these objectives, Roosevelt agreed to travel over 10,000 arduous miles to Yalta, to meet in February 1945 with Stalin and Churchill.

In Roosevelt’s mind, Stalin was by then the key both to victory on the battlefield and to a lasting peace afterwards — and he was, in Roosevelt’s phrase, “get-at-able” (p.28) with the right doses of the legendary Roosevelt charm.  Roosevelt had begun his serious courtship of the Soviet leader at their first meeting in Teheran in December 1943.  His fixation on Stalin, “crossing over now and then into realms of fantasy” (p.28), continued at Yalta. Lelyveld’s treatment of Roosevelt at Yalta covers similar ground to that in Michael Dobbs’ Six Months That Shook the World, reviewed here in April 2015. In Lelyveld’s account, as in that of Dobbs, a mentally and physically exhausted Roosevelt at Yalta ignored the briefing books his staff prepared for him and relied instead upon improvisation and his political instincts, fully confident that he could win over Stalin by force of personality.

According to cardiologist Bruenn’s memoir, published a quarter of a century later, early in the conference Roosevelt showed worrying signs of oxygen deficiency in his blood. His habitually high blood pressure readings revealed a dangerous condition, pulsus alternans, in which every second heartbeat was weaker than the preceding one, a “warning signal from an overworked heart” (p.270).   Dr. Bruenn ordered Roosevelt to curtail his activities in the midst of the conference. Churchill’s physician, Lord Moran, wrote that Roosevelt had “all the symptoms of hardening of arteries in the brain” during the conference and gave the president “only a few months to live” (p.270-71). Churchill himself commented that his wartime ally “really was a pale reflection almost throughout” (p.270) the Yalta conference.

Yet, Roosevelt recovered sufficiently to return home from the conference and address Congress and the public on its results, plausibly claiming victory. The Soviet Union had agreed to participate in the United Nations and in the war in Asia, and to hold what could be construed as free elections in Poland. Had he lived longer, Roosevelt would have seen that Stalin delivered as promised on the Asian war. The Soviet Union also became a member of the United Nations and maintained its membership in the organization until its dissolution in 1991, but was rarely if ever the partner Roosevelt envisioned in keeping world peace. The possibility of a democratic Poland, “by far the knottiest and most time-consuming issue Roosevelt confronted at Yalta” (p.285), was by contrast slipping away even before Roosevelt’s death.

At one point in his remaining weeks, Roosevelt exclaimed, “We can’t do business with Stalin. He has broken every one of the promises he made at Yalta” on Poland (p.304; Dobbs includes the same quotation, adding that Roosevelt thumped on his wheelchair at the time of this outburst). But, like Dobbs, Lelyveld argues that even a more physically fit, fully focused and coldly realistic Roosevelt would likely have been unable to save Poland from Soviet clutches. When the allies met at Yalta, Stalin’s Red Army was in the process of consolidating military control over almost all of Polish territory.  If Roosevelt had been at the peak of vigor, Lelyveld concludes, the results on Poland “would have been much the same” (p.287).

Roosevelt was still trying to mend fences with Stalin on April 11, 1945, the day before his death in Warm Springs. Throughout the following morning, Roosevelt worked on matters of state: he received an update on the US military advances within Germany and even signed a bill sustaining the Commodity Credit Corporation. Then, just before lunch, Roosevelt collapsed. Dr. Bruenn arrived about 15 minutes later and diagnosed a hemorrhage in the brain, a stroke likely caused by the bursting of a blood vessel in the brain or the rupture of an aneurysm. “Roosevelt was doomed from the instant he was stricken” (p.323).  Around midnight, Daisy Suckley recorded in her diary that the president had died at 3:35 pm that afternoon. “Franklin D. Roosevelt, the hope of the world, is dead” (p.324), she wrote.

Daisy was one of several women present at Warm Springs to provide company to the president during his final visit. Another was Eleanor Roosevelt’s former Secretary, Lucy Mercer Rutherford, by this time the primary Other Woman in the president’s life. Rutherford had driven down from South Carolina to be with the president, part of a recurring pattern in which Rutherford appeared in instances when wife Eleanor was absent, as if coordinated by a social secretary with the knowing consent of all concerned. But this orchestration broke down in Warm Springs in April 1945. After the president died, Rutherford had to flee in haste to make room for Eleanor. Still another woman in the president’s entourage, loquacious cousin Laura Delano, compounded Eleanor’s grief by letting her know that Rutherford had been in Warm Springs for the previous three days, adding gratuitously that Rutherford had also served as hostess at occasions at the White House when Eleanor was away. “Grief and bitter fury were folded tightly in a large knot” (p.325) for the former First Lady at Warm Springs.

Subsequently, Admiral McIntire asserted that Roosevelt had a “stout heart” and that his blood pressure was “not alarming at any time” (p.324-25), implying that the president’s death from a stroke had proven that McIntire had “always been right to downplay any suggestion that the president might have heart disease.” If not a flat-out falsehood, Lelyveld argues, McIntire’s assertion “at least raises the question of what it would have taken to alarm him” (p.325). Roosevelt’s medical file by this time had gone missing from the safe at Bethesda Naval Hospital, most likely removed by the Admiral because it would have revealed the “emptiness of the reassurances he’d fed the press and the public over the years, whenever questions arose about the president’s health” (p.325).

* * *

           Lelyveld declines to engage in what he terms an “argument without end” (p.92) on the degree to which Roosevelt’s deteriorating health impaired his job performance during his last months and final days. Rather, he  skillfully pieces together the limited historical record of Roosevelt’s medical condition to add new insights into the ailing but ever enigmatic president as he led his country nearly to the end of history’s most devastating war.

 

Thomas H. Peebles

La Châtaigneraie, France

March 28, 2017

 

 

 


High Point of Modern International Economic Diplomacy

Ed Conway, The Summit: Bretton Woods 1944,

J.M. Keynes and the Reshaping of the Global Economy 

               During the first three weeks of July 1944, as World War II raged on the far sides of the Atlantic and Pacific oceans, 730 delegates from 44 countries gathered at the Mount Washington Hotel in northern New Hampshire for what has come to be known as the Bretton Woods conference. The conference’s objective was audacious: create a new and more stable framework for the post-World War II monetary order, with the hope of avoiding future economic upheavals like the Great Depression of the 1930s.  To this end, the delegates reconsidered and in many cases rewrote some of the most basic rules of international finance and global capitalism, such as how money should flow between sovereign states, how exchange rates should interact, and how central banks should set interest rates. The venerable but aging hotel stood in an area informally known as Bretton Woods, not far from Mount Washington itself, the highest peak in the eastern United States.

In The Summit, Bretton Woods, 1944: J.M. Keynes and the Reshaping of the Global Economy, Ed Conway, formerly economics editor for Britain’s Daily Telegraph and Sunday Telegraph and presently economics editor for Sky News, provides new and fascinating detail about the conference. The word “summit” in his title carries a triple sense: it refers to Mount Washington and to the term that came into use in the following decade for a meeting of international leaders. But Conway also contends that the Bretton Woods conference now appears to have been another sort of summit. The conference marked the “only time countries ever came together to remold the world’s monetary system” (p.xx).  It stands in history as the “very highest point of modern international economic diplomacy” (p.xxv).

Conway differentiates his work from others on Bretton Woods by focusing on the interactions among the delegates and the “sheer human drama” (p.xxii) of the event.  As the sub-title indicates, British economist John Maynard Keynes is foremost among these delegates. Conway could have added to his subtitle the lesser-known Harry Dexter White, Chief International Economist at the US Treasury Department and Deputy to Treasury Secretary Henry Morgenthau, the head of the US delegation and formal president of the conference.  White’s name in the subtitle would have underscored that this book is a story about the relationship between the two men who assumed de facto leadership of the conference. But the book is also a story about the uneasy relationship at Bretton Woods between the United States and the United Kingdom, the conference’s two lead delegations.

Although allies in the fight against Nazi Germany, the two countries were far from allies at Bretton Woods.  Great Britain, one of the world’s most indebted nations, came to the conference unable to pay for its own defense in the war against Nazi Germany and unable to protect and preserve its vast worldwide empire.  It was utterly outmatched at Bretton Woods by an already dominant United States, its principal creditor, which had little interest in providing debt relief to Britain or helping it maintain an empire. Even the force of Keynes’ dominating personality was insufficient to give Britain much more than a supplicant’s role at Bretton Woods.

Conway’s book also constitutes a useful and understandable historical overview of the international monetary order from pre-World War I days up to Bretton Woods and beyond.  The overview revolves around the gold standard as a basis for international currency exchanges and attempts over the years to find workable alternatives. Bretton Woods produced such an alternative, a standard pegged to the United States dollar — which, paradoxically, was itself tied to the price of gold.  Bretton Woods also produced two key institutions, the International Monetary Fund (IMF) and the International Bank for Reconstruction and Development, now known as the World Bank, designed to provide stability to the new economic order. But the Bretton Woods dollar standard remained in effect only until 1971, when US President Richard Nixon severed by presidential fiat the link between the dollar and gold, allowing currency values to float, as they had done in the 1930s.  In Conway’s view, the demise of Bretton Woods is to be regretted.

* * *

          Keynes was a legendary figure when he arrived at Bretton Woods in July 1944, a “genuine international celebrity, the only household name at Bretton Woods” (p.xv). Educated at King’s College, Cambridge, a member of the faculty of that august institution, and a peer in Britain’s House of Lords, Keynes was also a highly skilled writer and journalist, as well as a fearsome debater.  As a young man, he established his reputation with a famous critique of the 1919 Versailles Treaty, The Economic Consequences of the Peace, a tract that predicted with eerie accuracy the breakdown of the financial order that the post-World War I treaty envisioned, based upon the imposition of punitive reparations on Germany. Although Keynes dazzled fellow delegates at Bretton Woods with his rhetorical brilliance, he was given to outlandish and provocative statements that hardly helped the bonhomie of the conference.  He suffered a heart attack toward the end of the conference and died less than two years later.

White was a contrast to Keynes in just about every way. He came from a modest first generation Jewish immigrant family from Boston and had to scramble for his education. Unusual for the time, in his 30s White earned an undergraduate degree from Stanford after having spent the better portion of a decade as a social worker. White had a dour personality, with none of Keynes’ flamboyance. Then there were the physical differences.  Keynes stood about six feet six inches tall (approximately 2.0 meters), whereas White was at least a foot shorter (approximately 1.7 meters). But if Keynes was the marquee star of Bretton Woods because of his personality and reputation, White was its driving force because he represented the United States, indisputably the conference’s dominant power.

By the time of the Bretton Woods conference, however, White was also unduly familiar with Russian intelligence services. Although Conway hesitates to slap the “spy” label on him, there is little doubt that White provided a hefty amount of information to the Soviets, both at the conference and outside its confines. Of course, much of the “information sharing” took place during World War II, when the Soviet Union was allied with Britain and the United States in the fight against Nazi Germany and such sharing was seen in a different light than in the subsequent Cold War era.  One possibility, Conway speculates, was that White was “merely carrying out his own, personal form of diplomacy – unaware that the Soviets were construing this as espionage” (p.159; the Soviet Union attended the conference but did not join the international mechanisms which the conference established).

The reality, Conway concludes, is that we will “never know for certain whether White knowingly betrayed his country by passing information to the Soviets” (p.362).  Critically, there is “no evidence that White’s Soviet activities undermined the Bretton Woods agreement itself” (p.163). White died in 1948, four years after the conference, and the FBI’s case against him became moot. From that point onward, the question whether White was a spy for the Soviet Union became one almost exclusively for historians, a question that today remains unresolved (ironically, after White’s death, young Congressman Richard Nixon remained just about the only public official still interested in White’s case; when Nixon became president two decades later, he terminated the Bretton Woods financial standards White had helped create).

The conference itself begins at about the book’s halfway point. Prior to his account of its deliberations, Conway shows how the gold standard operated and traces the search for workable alternatives. In the period up to World War I, the world’s powers guaranteed that they could redeem their currency for its value in gold. The World War I belligerents went off the gold standard so they could print the currency needed to pay for their war costs, causing hyperinflation, as the supply of money overwhelmed the demand.  In the 1920s, countries gradually returned to the gold standard.

But the stock market crash of 1929 and ensuing depression prompted countries to again abandon the gold standard. In the 1930s, what Conway terms a “gold exchange standard” prevailed, in which governments undertook competitive devaluations of their currency. President Franklin Roosevelt, for example, used a “primitive scheme” to set the dollar “where he wanted it – which meant as low against the [British] pound as possible” (p.83).  The competitive devaluations and floating rates of the 1930s led to restrictive trade policies, discouraged trade and investment, and encouraged destabilizing speculation, all of which many economists linked to the devastating war that broke out across the globe at the end of the decade.

Bretton Woods sought to eliminate these disruptions for the post-war world by crafting an international monetary system based upon cooperation among the world’s sovereign states. The conference was preceded by nearly two years of negotiations between the Treasury Departments of Great Britain and the United States — essentially exchanges between Keynes and White, each with a plan on how a new international monetary order should operate. Both were “determined to use the conference to safeguard their own economies” (p.18). Keynes wanted to protect not only the British Empire but also London’s place as the center of international finance. White saw little need to protect the empire and foresaw New York as the world’s new economic hub.  He also wanted to locate the two institutions that Bretton Woods would create, the IMF and World Bank, in the United States, whereas Keynes hoped that at least one would be located either in Britain or on the European continent. White and the Americans would win on these and almost all other points of difference.

But Keynes and White shared a broad general vision that Bretton Woods should produce a system designed to do away with the worst effects of both the gold standard and the interwar years of instability and depression.   There needed to be something in between the rigidity associated with the gold standard on the one hand and free-floating currencies, which were “associated with dangerous flows of ‘hot money’ and inescapable lurches in exchange rates” (p.124), on the other. To White and the American delegation, “Bretton Woods needed to look as similar as possible to the gold standard: politicians’ hands should be tied to prevent them from inflating away their debts. It was essential to avoid the threat of the competitive devaluations that had wreaked such havoc in the 1930s” (p.171).  For Keynes and his colleagues, “Bretton Woods should be about ensuring stable world trade – without the rigidity of the gold standard” (p.171).

The British and American delegations met in Atlantic City in June 1944 in an attempt to narrow their differences before travelling to Northern New Hampshire, where the floor would be opened to the conference’s additional delegations.  Much of what happened at Bretton Woods was confined to the business pages of the newspapers, with attention focused on the war effort and President Roosevelt’s re-election bid for a fourth presidential term.  This suited White, who “wanted the conference to look as uncontroversial, technical and boring as possible” (p.203).  The conference was split into three main parts. White chaired Commission I, dealing with the IMF, while Keynes chaired Commission II, whose focus was the World Bank.  Each commission divided into multiple committees and sub-committees.  Commission III, whose formal title was “Other Means of International Cooperation,” was in Conway’s view essentially a “toxic waste dump into which White and Keynes could jettison some of the summit’s trickier issues” (p.216).

The core principle to emerge from the Bretton Woods deliberations was that the world’s currencies, rather than being tied directly to gold or allowed to float, would be pegged to the US dollar which, in turn, was tied to gold at a value of $35 per ounce. Keynes and White anticipated that fixing currencies against the dollar would ensure that:

international trade was protected from exchange rate risk. Nations would determine their own interest rates for purely domestic economic reasons, whereas under the gold standard, rates had been set primarily in order to keep the country’s gold stocks at an acceptable level. Countries would be allowed to devalue their currency if they became uncompetitive – but they would have to notify the International Monetary Fund in advance: this element of international co-ordination was intended to guard against a repeat of the 1930s spiral of competitive devaluation (p.369).

 

The IMF’s primary purpose under the Bretton Woods framework was to provide relief in balance of payments crises such as those of the 1930s, when countries in deficit were unable to borrow and exporting countries failed to find markets for their goods. “Rather than leaving the market to its own devices – the laissez-faire strategy discredited in the Depression – the Fund would be able to step in and lend countries money, crucially in whichever currency they most needed. So as to avoid the threat of competitive devaluations, the Fund would also arbitrate whether a country could devalue its exchange rate” (p.169).

One of the most sensitive issues in structuring the IMF involved the contributions that each country was required to pay into the Fund, termed “quotas.” When short of reserves, each member state would be entitled to borrow needed foreign currency in amounts determined by the size of its quota.  Most countries wanted to contribute more rather than less, both as a matter of national pride and as a means to gain future leverage with the Fund. Heated quota battles ensued “both publicly in the conference rooms and privately in the hotel corridors, until the very end of the proceedings” (p.222-23), with the United States ultimately determining quota amounts according to a process most delegations considered opaque and secretive.

The World Bank, almost an afterthought at the conference, was to have the power to finance reconstruction in Europe and elsewhere after the war.  But the Marshall Plan, an “extraordinary program of aid devoted to shoring up Europe’s economy” (p.357), upended Bretton Woods’ visions for both institutions for nearly a decade.  It was the Marshall Plan that rebuilt Europe in the post-war years, not the IMF or the World Bank. The Fund’s main role in its initial years, Conway notes, was to funnel money to member countries “as a stop-gap before their Marshall Plan aid arrived” (p.357).

When Harry Truman became President in April 1945 after Roosevelt’s death, he replaced Roosevelt’s Treasury Secretary Henry Morgenthau, White’s boss, with future Supreme Court justice Fred Vinson. Never a fan of White, Vinson diminished his role at Treasury and White left the department in 1947. He died the following year, in August 1948 at age 55.  Although the July 1945 change in British Prime Ministers from Winston Churchill to Clement Attlee did not undermine Keynes to the same extent, his deteriorating health diminished his role after Bretton Woods as well. Keynes died in April 1946 at age 62, shortly after returning to Britain from the inaugural IMF meeting in Savannah, Georgia, his last encounter with White.

Throughout the 1950s, the US dollar assumed a “new degree of hegemony,” becoming “formally equivalent to gold. So when they sought to bolster their foreign exchange reserves to protect them from future crises, foreign governments built up large reserves of dollars” (p.374). But with more dollars in the world economy, the United States found it increasingly difficult to convert them back into gold at the official exchange rate of $35 per ounce.  When Richard Nixon became president in 1969, the United States held $10.5 billion in gold, but foreign governments had $40 billion in dollar reserves, and foreign investors and corporations held another $30 billion. The world’s monetary system had become, once again, an “inverted pyramid of paper money perched on a static stack of gold” and Bretton Woods was “buckling so badly it seemed almost certain to collapse” (p.377).

In a single secluded weekend in 1971 at the Presidential retreat at Camp David, Maryland, Nixon’s advisors fashioned a plan to “close the gold window”: the United States would no longer provide gold to official foreign holders of dollars and instead would impose “aggressive new surcharges and taxes on imports intended to push other countries into revaluing their own currencies” (p.381).  When Nixon agreed to his advisors’ proposal,  the Bretton Woods system, which had “begun with fanfare, an unprecedented series of conferences and the deepest investigation in history into the state of macro-economics” ended overnight, “without almost anyone realizing it” (p.385). The era of fixed exchange rates was over, with currency values henceforth to be determined by “what traders and investors thought they were worth” (p.392).  Since 1971, the world’s monetary system has operated on what Conway describes as an “ad hoc basis, with no particular sense of the direction in which to follow” (p.401).

* * *

            In his epilogue, Conway cites a 2011 Bank of England study that showed that between 1948 and the early 1970s, the world enjoyed a “period of economic growth and stability that has never been rivaled – before or since” (p.388).  In Bretton Woods member states during this period “life expectancy climbed swiftly higher, inequality fell, and social welfare systems were constructed which, for the time being at least, seemed eminently affordable” (p.388).  The “imperfect” and “short-lived” (p.406) system which Keynes and White fashioned at Bretton Woods may not be the full explanation for these developments but it surely contributed.  In the messy world of international economics, that system has “come to represent something hopeful, something closer to perfection” (p.408).  The two men at the center of this captivating story came to Bretton Woods intent upon repairing the world’s economic system and replacing it with something better — something that might avert future economic depressions and the resort to war to settle differences.  “For a time,” Conway concludes, “they succeeded” (p.408).

Thomas H. Peebles

La Châtaigneraie, France

March 8, 2017


Do Something


Zachary Kaufman, United States Law and Policy on Transitional Justice:

Principles, Politics, and Pragmatics 

             The term “transitional justice” is applied most frequently to “post conflict” situations, where a nation state or region is emerging from some type of war or violent conflict that has given rise to genocide, war crimes, or crimes against humanity — each now a recognized concept under international law, with “mass atrocities” being a common shorthand used to embrace these and related concepts. In United States Law and Policy on Transitional Justice: Principles, Politics, and Pragmatics, Zachary Kaufman, a Senior Fellow and expert on human rights at Harvard University’s Kennedy School of Government, explores the circumstances which have led the United States to support that portion of the transitional justice process that determines how to deal with suspected perpetrators of mass atrocities, and why it chooses a particular means of support (disclosure: Kaufman and I worked together in the US Department of Justice’s overseas assistance unit between 2000 and 2002, although we had different portfolios: Kaufman’s involved Africa and the Middle East, while I handled Central and Eastern Europe).

          Kaufman’s book, adapted from his Oxford University PhD dissertation, centers on case studies of the United States’ role in four major transitional justice situations: Germany and Japan after World War II, and ex-Yugoslavia and Rwanda in the 1990s, after the end of the Cold War. It also looks more briefly at two secondary cases, the 1988 bombing of Pan American flight 103, attributed to Libyan nationals, and atrocities committed during Iraq’s 1990-91 occupation of Kuwait. Making extensive use of internal US government documents, many of which have been declassified, Kaufman digs deeply into the thought processes that informed the United States’ decisions on transitional justice in these six post-conflict situations. Kaufman brings a social science perspective to his work, attempting to tease out of the case studies general rules about how the United States might act in future transitional justice situations.

          The term “transitional justice” implicitly affirms that a permanent and independent national justice system can and should be created or restored in the post-conflict state.  Kaufman notes at one point that dealing with suspected perpetrators of mass atrocities is just one of several critical tasks involved in creating or restoring a permanent national justice system in a post-conflict state.  Others can include: building or rebuilding sustainable judicial institutions, strengthening the post-conflict state’s legislation, improving capacity of its justice-sector personnel, and creating or upgrading the physical infrastructure needed for a functioning justice system. These latter tasks are not the focus of Kaufman’s work. Moreover, in determining how to deal with alleged perpetrators of mass atrocities, Kaufman’s focus is on the front end of the process: how and why the United States determined to support this portion of the process generally and why it chose particular mechanisms rather than others.   The outcomes that the mechanisms produce, although mentioned briefly, are not his focus either.

          In each of the four primary cases, the United States joined other nations to prosecute those accused or suspected of involvement in mass atrocities before an international criminal tribunal, which Kaufman characterizes as the “most significant type of transitional justice institution” (p.12). Prosecution before an international tribunal, he notes, can promote stability, the rule of law and accountability, and can serve as a deterrent to future atrocities. But the process can be both slow and expensive, with significant political and legal risks. Kaufman’s work provides a useful reminder that prosecution by an international tribunal is far from the only option available to deal with alleged perpetrators of mass atrocities. Others include trials in other jurisdictions, including those of the post-conflict state, and several non-judicial alternatives: amnesty for those suspected of committing mass atrocities, with or without conditions; “lustration,” where suspected persons are disenfranchised from specific aspects of civic life (e.g., declared ineligible for the civil service or the military); and “doing nothing,” which Kaufman considers tantamount to unconditional amnesty. Finally, there is the option of summary execution or other punishment, without benefit of trial. These options can be applied in combination, e.g., amnesty for some, trial for others.

         Kaufman weighs two models, “legalism” and “prudentialism,” as potential explanations for why and how the United States acted in the cases under study and is likely to act in the future. Legalism contends that prosecution before an international tribunal of individuals suspected or accused of mass atrocities is the only option a liberal democratic state may elect, consistent with its adherence to the rule of law. In limited cases, amnesty or lustration may be justified as a supplement to initiating cases before a tribunal. Summary execution may never be justified. Prudentialism is more ad hoc and flexible, with the question whether to establish or invoke an international criminal tribunal or pursue other options determined by any number of different political, pragmatic and normative considerations, including such geo-political factors as promotion of stability in the post-conflict state and region, the determining state or states’ own national security interests, and the relationships between determining states. Almost by definition, legalism precludes consideration of these factors.

          Kaufman presents his cases in a highly systematic manner, with tight overall organization. An introduction and three initial chapters set forth the conceptual framework for the subsequent case studies, addressing matters like methodology and definitional parameters. The four major cases are then treated in four separate chapters, each with its own introduction and conclusion, followed by an overall conclusion, also with its own introduction and conclusion (the two secondary cases, Libya and Iraq, are treated within the chapter on ex-Yugoslavia). Substantive headings throughout each chapter make his arguments easy to follow. General readers may find jarring his extensive use of acronyms throughout the text, drawn from a three-page list contained at the outset. But amidst Kaufman’s deeply analytical exploration of the thinking that lay behind the United States’ actions, readers will appreciate his decidedly non-sociological hypothesis as to why the United States elects to engage in the transitional justice process: a deeply felt American need in the wake of mass atrocities to “do something” (always in quotation marks).

* * *

          Kaufman begins his case studies with the best-known example of transitional justice, Nazi Germany after World War II. The United States supported creation of what has come to be known as the Nuremberg War Crimes tribunal, a military court administered by the four victorious allies, the United States, Soviet Union, Great Britain and France. The Nuremberg story is so well known, thanks in part to “Judgment at Nuremberg,” the best-selling book and popular film, that most readers will assume that the multi-lateral Nuremberg trials were the only option seriously under consideration at the time. To the contrary, Kaufman demonstrates that such trials were far from the only option on the table.

        For a while the United States seriously considered summary executions of accused Nazi leaders. British Prime Minister Winston Churchill pushed this option during wartime deliberations and, Kaufman indicates, President Roosevelt seemed at times on the cusp of agreeing to it. Equally surprisingly, Soviet Union leader Joseph Stalin lobbied early and hard for a trial process rather than summary executions. The Nuremberg Tribunal “might not have been created without Stalin’s early, constant, and forceful lobbying” (p.89), Kaufman contends.  Roosevelt abandoned his preference for summary executions after economic aspects of the Morgenthau Plan, which involved the “pastoralization” of Germany, were leaked to the press. When the American public “expressed its outrage at treating Germany so harshly through a form of economic sanctions,” Roosevelt concluded that Americans would be “unsupportive of severe treatment for the Germans through summary execution” (p.85).

          But the United States’ support for war crimes trials became unwavering only after Roosevelt died in April 1945 and Harry S. Truman assumed the presidency. The details and mechanics of a multi-lateral trial process were not worked out until early August 1945 in the “London Agreement,” after Churchill had been voted out of office and Labour Prime Minister Clement Attlee represented Britain. Trials against 22 high level Nazi officials began in November 1945, with verdicts rendered in October 1946: twelve defendants were sentenced to death, seven drew prison sentences, and three were acquitted.

       Many lower level Nazi officials were tried in unilateral prosecutions by one of the allied powers. Lustration, barring active Nazi party members from major public and private positions, was applied in the US, British, and Soviet sectors. Numerous high level Nazi officials were allowed to emigrate to the United States to assist in Cold War endeavors, which Kaufman characterizes as a “conditional amnesty” (the Nazi war criminals who emigrated to the United States are the subject of Eric Lichtblau’s The Nazis Next Door: How America Became a Safe Haven for Hitler’s Men, reviewed here in October 2015; Frederick Taylor’s Exorcising Hitler: The Occupation and Denazification of Germany, reviewed here in December 2012, addresses more generally the manner in which the Allies dealt with lower level Nazi officials). By 1949, the Cold War between the Soviet Union and the West undermined the allies’ appetite for prosecution, with the Korean War completing the process of diverting the world’s attention away from Nazi war criminals.

          The story behind creation of the International Military Tribunal for the Far East, designed to hold accountable accused Japanese perpetrators of mass atrocities, is far less known than that of Nuremberg, Kaufman observes. What has come to be known as the “Tokyo Tribunal” largely followed the Nuremberg model, with some modifications. Even though 11 allies were involved, the United States came closer to being the sole decision-maker on the options to pursue in Japan than it had been in Germany. As the lead occupier of post-war Japan, the United States had “no choice but to ‘do something’” (p.119). Only the United States had both the means and will to oversee the post-conflict occupation and administration of Japan. That oversight authority was vested largely in a single individual, General Douglas MacArthur, Supreme Commander of the Allied forces, whose extraordinarily broad – nearly dictatorial – authority in post-World War II Japan extended to the transitional justice process. MacArthur approved appointments to the tribunal, signed off on its indictments, and exercised review authority over its decisions.

            In the interest of securing the stability of post-war Japan, the United States accorded unconditional amnesty to Japan’s Emperor Hirohito. The Tokyo Tribunal indicted twenty-eight high-level Japanese officials, but more than fifty were not indicted, and thus also benefited from an unconditional amnesty. This included many suspected of “direct involvement in some of the most horrific crimes of WWII” (p.108), several of whom eventually returned to Japanese politics. Through lustration, more than 200,000 Japanese were removed or barred from public office, either permanently or temporarily.  As in Germany, by the late 1940s the emerging Cold War with the Soviet Union had chilled the United States’ enthusiasm for prosecuting Japanese suspected of war crimes.

           The next major United States engagements in transitional justice arose in the 1990s, when the former Yugoslavia collapsed into a spasm of ethnic violence and massive ethnic-based genocide erupted in Rwanda in 1994. By this time, the Soviet Union had itself collapsed and the Cold War was over. In both instances, heavy United States involvement in the post-conflict process was attributed in part to a sense of remorse for its lack of involvement in the conflicts themselves and its failure to halt the ethnic violence, resulting in a need to “do something.” Rwanda marks the only instance among the four primary cases where mass atrocities arose out of an internal conflict.

       The ethnic conflicts in Yugoslavia led to the creation of the International Criminal Tribunal for Yugoslavia (ICTY), based in The Hague and administered under the auspices of the United Nations Security Council. Kaufman provides much useful insight into the thinking behind the United States’ support for the creation of the court and the decision to base it in The Hague as an authorized Security Council institution. His documentation shows that United States officials consistently invoked the Nuremberg experience. The United States supported a multi-lateral tribunal through the Security Council because the council could “obligate all states to honor its mandates, which would be critical to the tribunal’s success” (p.157). The United States saw the ICTY as critical in laying a foundation for regional peace and facilitating reconciliation among competing factions. But it also supported the ICTY and took a lead role in its design to “prevent it from becoming a permanent [tribunal] with global reach” (p.158), which it deemed “potentially problematic” (p.157).

             The United States’ willingness to involve itself in the post-conflict transitional process in Rwanda,   even more than in the ex-Yugoslavia, may be attributed to its failure to intervene during the worst moments of the genocide itself.  That the United States “did not send troops or other assistance to Rwanda perversely may have increased the likelihood of involvement in the immediate aftermath,” Kaufman writes. A “desire to compensate for its foreign policy failures in Rwanda, if not also feelings of guilt over not intervening, apparently motivated at least some [US] officials to support a transitional justice institution for Rwanda” (p.197).

        Once the Rwandan civil war subsided, there was a strong consensus within the international community that some kind of international tribunal was needed to impose accountability upon the most egregious génocidaires; that any such tribunal should operate under the auspices of the United Nations Security Council; that the tribunal should in some sense be modeled after the ICTY; and that the United States should take the lead in establishing the tribunal. The ICTY precedent prompted US officials to “consider carefully the consistency with which they applied transitional justice solutions in different regions; they wanted the international community to view [the US] as treating Africans similarly to Europeans” (p.182). According to these officials, after the precedent of proactive United States involvement in the “arguably less egregious Balkans crisis,” the United States would have found it “politically difficult to justify inaction in post-genocide Rwanda” (p.182).

           The United States favored a tribunal modeled after and structurally similar to the ICTY, which came to be known as the International Criminal Tribunal for Rwanda (ICTR). The ICTR was the first international court having competence to “prosecute and punish individuals for egregious crimes committed during an internal conflict” (p.174), a watershed development in international law and transitional justice. To deal with lower level génocidaires, the Rwandan government and the international community later instituted additional prosecutorial measures, including prosecutions by Rwandan domestic courts and local domestic councils, termed gacaca.

          No international tribunals were created in the two secondary cases, Libya after the 1988 bombing of Pan Am flight 103, and the 1990-91 Iraqi invasion of Kuwait. At the time of the Pan Am bombing, more than a decade prior to the September 11, 2001 attacks, United States officials considered terrorism a matter to be addressed “exclusively in domestic contexts” (p.156). In the case of the bombing of Pan Am 103, where Americans had been killed, competent courts were available in the United States and the United Kingdom. There were numerous documented cases of Iraqi atrocities against Kuwaiti civilians committed during Iraq’s 1990-91 invasion of Kuwait. But the 1991 Gulf War, while driving Iraq out of Kuwait, otherwise left Iraqi leader Saddam Hussein in power. The United States was therefore not in a position to impose accountability upon Iraqis for atrocities committed in Kuwait, as it had done after defeating Germany and Japan in World War II.

* * *

         In evaluating the prudentialism and legalism models as ways to explain the United States’ actions in the four primary cases, prudentialism emerges as a clear winner. Kaufman convincingly demonstrates that the United States in each case was open to multiple options and motivated by geo-political and other non-legal considerations. Indeed, it is difficult to imagine that the United States – or any other state for that matter – would ever, in advance, agree to disregard such considerations, as the legalism model seems to demand. After reflecting upon Kaufman’s analysis, I concluded that legalism might best be understood as more aspirational than empirical, a forward-looking, prescriptive model as to how the United States should act in future transitional justice situations, favored in particular by human rights organizations.

         But Kaufman also shows that the United States’ approach in each of the four cases was not entirely an ad hoc weighing of geo-political and related considerations.  Critical to his analysis are the threads which link the four cases, what he terms “path dependency,” whereby the Nuremberg trial process for Nazi war criminals served as a powerful influence upon the process set up for their Japanese counterparts; the combined Nuremberg-Tokyo experience weighed heavily in the creation of ICTY; and ICTY strongly influenced the structure and procedure of ICTR.   This cumulative experience constitutes another factor in explaining why the United States in the end opted for international criminal tribunals in each of the four cases.

         If a general rule can be extracted from Kaufman’s four primary cases, it might therefore be that an international criminal tribunal has evolved into the “default option” for the United States in transitional justice situations,  showing the strong pull of the only option which the legalism model considers consistent with the rule of law.  But these precedents may exert less hold on US policy makers going forward, as an incoming administration reconsiders the United States’ role in the 21st century global order. Or, to use Kaufman’s apt phrase, there may be less need felt for the United States to “do something” in the wake of future mass atrocities.

Thomas H. Peebles

Venice, Italy

February 10, 2017

 


Filed under American Politics, United States History

Reporting From the Front Lines of the Enlightenment



Robert Zaretsky, Boswell’s Enlightenment

           The 18th century Enlightenment was an extraordinary time when religious skepticism rose across Europe and philosophes boldly asserted that man’s capacity for reason was the key to understanding both human nature and the nature of the universe.   In Boswell’s Enlightenment, Robert Zaretsky, Professor of History at the University of Houston, provides a highly personalized view of the Enlightenment as experienced by James Boswell (1740-1795), the faithful Scottish companion to Dr. Samuel Johnson and author of a seminal biography on the learned doctor.  The crux of Zaretsky’s story lies in  Boswell’s tour of the European continent between 1763 and 1765 – the “Grand Tour” – where, as a young man, Boswell encountered seemingly all the period’s leading thinkers, including Jean Jacques Rousseau and François-Marie Arouet, known to history as Voltaire, then Europe’s two best known philosophes. Zaretsky’s self-described purpose is to “place Boswell’s tour of the Continent, and situate the churn of his mind, against the intellectual and political backdrop of the Enlightenment” (p.16-17). Also figuring prominently in Zaretsky’s account are Boswell’s encounters prior to departing for Europe with several leading Scottish luminaries, most notably David Hume, Britain’s best-known religious skeptic. The account further includes the beginning phases of Boswell’s life-long relationship with Johnson, the “most celebrated literary figure in London” (p.71) and, for Boswell, already a “moral and intellectual rock” (p.227).

         But Zaretsky’s title is a delicious double entendre, for his book is simultaneously the intriguing story of Boswell’s personal coming of age in the mid-18th century – his “enlightenment” with a small “e” – amidst the intellectual fervor of his times. The young Boswell searching for himself  was more than a little sycophantic, with an uncommon facility to curry favor with the prominent personalities of his day – an unabashed 18th century celebrity hound.  But Boswell also possessed a fertile, impressionable mind, along with a young man’s zest to experience life in all its facets. Upon leaving for his Grand Tour, moreover, Boswell was already a prolific if not yet entirely polished writer who kept a detailed journal of his travels, much of which survives. In his journal, the introspective Boswell was a “merciless self-critic” (p.97). Yet, Zaretsky writes, Boswell’s ability to re-create conversations and characters in his journals makes him a “remarkable witness to his age” (p.15).  Few individuals “reported in so sustained and thorough a manner as did Boswell from the front lines of the Enlightenment” (p.13).

* * *

        In his prologue, Zaretsky raises the question whether the 18th century Enlightenment should be considered a unified phenomenon, centered in France and radiating out from there, or whether it makes more sense to think of separate Enlightenments, such as, for example, both a Scottish and a French Enlightenment. This is a familiar theme to assiduous readers of this blog: in 2013, I reviewed Arthur Herman’s exuberant claim to a distinct Scottish Enlightenment, and Gertrude Himmelfarb’s more sober argument for distinctive French, English and American Enlightenments. Without answering this always-pertinent question, Zaretsky turns his account to young Boswell’s search for himself and the greatest minds of 18th century Europe.

        Boswell was the son of a prominent Edinburgh judge, Alexander Boswell, Lord Auchinleck, a follower of John Knox’s stern brand of Calvinism and an overriding force in young Boswell’s life. Boswell’s effort to break the grip that his father exerted over his life was also in many senses an attempt to break the grip of his Calvinist upbringing. When as a law student in Edinburgh his son developed what Lord Auchinleck considered a most unhealthy interest in theatre — and women working in the theatre — he sent the wayward son from lively and overly liberal Edinburgh to more subdued Glasgow. There, Boswell came under the influence of renowned professor Adam Smith.  Although his arguments for the advantages of laissez faire capitalism came later, Smith was already a sensation across Europe for his view that empathy, or “fellow feeling,” was the key to understanding what makes human beings good.    A few years later, Lord Auchinleck started his son on his Grand Tour across the European continent by insisting that young Boswell study civil law in the Netherlands, as he had done in his student days.

        Throughout his travels, the young Boswell wrestled with the question of religious faith and how it might be reconciled with the demands of reason. The religious skepticism of Hume, Voltaire, and Rousseau weighed on him.  But, like Johnson, Boswell was not quite ready to buy into it. For Boswell, reason was “not equal to the task of absorbing the reality of our end, this thought of our death. Instead, religion alone offered respite” (p.241). In an age where death was a “constant and dire presence,” Boswell “stands out for his preoccupation, if not obsession, with his mortal end” (p.15). Boswell’s chronic “hypochondria” – the term used in Boswell’s time for depression — was “closely tied to his preoccupation with his mortality” (p.15).  For Boswell, like Johnson, the defense of traditional religion was “less fear of hell than fear of nothingness – what both men called ‘annihilation’” (p.85).

      Boswell’s fear of the annihilation of death probably helps explain his lifelong fascination with public executions. Throughout the Grand Tour, he consistently went out of his way to attend these very public 18th century spectacles, “transfixed by the ways in which the victims approached their last moments” (p.15). Boswell’s attraction to public executions, whose official justification was to “educate the public on the consequences of crime,” was, Zaretsky notes, “exceptional even among his contemporaries” (p.80). But if the young Boswell feared death, he dove deeply into life and, through his journal, shared his dives with posterity.

        A prodigious drinker and carouser, Boswell seduced women across the continent, often the wives of men he was meeting to discuss the profound issues of life and death. At seemingly every stop along the way, moreover, he patronized establishments practicing the world’s oldest profession, with several bouts of gonorrhea resulting from these frequentations, followed by excruciatingly painful medical treatments. Boswell’s multiple encounters with the opposite sex form a colorful portion of his journal and are no small portion of the reason why the journal continues to fascinate readers to this day.

        But Boswell’s first significant encounter with the opposite sex during the Grand Tour was also his first significant encounter on the continent with an Enlightenment luminary, Elisabeth van Tuyll van Serooskerken, whose name the young Scot wisely shortened to “Belle.” Boswell met Belle in Utrecht, the Netherlands, his initial stop on the Grand Tour, where he was ostensibly studying civil law. Belle, who went on to write several epistolary novels under her married name, Isabelle de Charrière, was a sophisticated religious skeptic who understood the “social and moral necessity of religion; but she also understood that true skepticism entailed, as Hume believed, a kind of humility and intellectual modesty” (p.127). Belle was not free of religious doubt, Zaretsky notes, but unlike Boswell, was “free of the temptation to seek certainty” (p.127). Boswell was attracted to Belle’s “lightning” mind, which, as he wrote a friend, “flashes with so much brilliance [that it] may scorch” (p.117). But Belle was not nearly as smitten by Boswell as he was with her, and her father never bothered to pass to his daughter the marriage proposal that Boswell had presented to him. The two parted when Boswell left Utrecht, seeking to put his unrequited love behind him.

        Boswell headed from the Netherlands to German-speaking Prussia and its king, “enlightened despot” Frederick the Great.  Zaretsky considers Frederick “far more despotic than enlightened” (p.143), but Frederick plainly saw the value to the state of religious tolerance. “Here everyone must be allowed to go to heaven in his own way” (p.145) summarized Frederick’s attitude toward religion.  Frederick proved to be one of the era’s few luminaries who was “indifferent to the Scot’s irrepressible efforts at presenting himself to them” (p.141), and Boswell had little direct time with the Prussian monarch during his six month stay.

          But Boswell managed back-to-back visits with Rousseau and Voltaire in Switzerland, his next destination. Rousseau and Voltaire had both been banished from Catholic France for heretical religious views. Rousseau, who was born in Calvinist Geneva,  was no longer welcome in that city either because of his religious views.  Beyond a shared disdain for organized religion, the former friends disagreed about just about everything else — culture and civilization, theater and literature, politics and education.  Zaretsky’s chapter on these visits, entitled “The Distance Between Môtiers and Ferney” – a reference to the remote Swiss locations where, respectively, Rousseau and Voltaire resided — is in my view the book’s best, with an erudite overview of the two men’s wide ranging thinking, their reactions to their impetuous young visitor, and the enmity that separated them.

         Zaretsky describes Rousseau as a “poet of nature” (p.148), for whom religious doctrines led “not to God, but instead to oppression and war” (p.149).   But Rousseau also questioned his era’s advances in learning and the Enlightenment’s belief in human progress. The more science and the arts advanced, Rousseau argued, the more  contemporary society became consumed by personal gain and greed.  Voltaire, the “high priest of the French Enlightenment” (p.12), was a poet, historian and moralist who had fled from France to England in the 1730s because of his heretical religious views. There, he absorbed the thinking of Francis Bacon, John Locke and Isaac Newton, whose pragmatic approach and grounded reason he found superior to the abstract reasoning and metaphysical speculation that he associated with Descartes. While not an original or systematic thinker like Locke or Bacon, Voltaire was an “immensely gifted translator of their work and method” (p.172).

          By the time Boswell arrived in Môtiers, the two philosophes were no longer on speaking terms. Rousseau publicly termed Voltaire a “mountebank” and “impious braggart,” a man of “so much talent put to such vile use” (p.158). Voltaire returned the verbal fire with a string of vitriolic epithets, among them “ridiculous,” “depraved,” “pitiful,” and “abominable.” The clash between the two men went beyond epithets and name-calling. Rousseau publicly identified Voltaire as the author of Oath of the Fifty, a “brutal and hilarious critique of Christian scripture” (p.180). Voltaire, for his part, revealed that Rousseau had fathered five children with his partner Thérèse Levasseur, children the couple subsequently abandoned.

        The enmity between the two men was not an obstacle to Boswell visiting each, although his actual meetings constitute a minor portion of the engrossing chapter. Boswell had an “improbable” five separate meetings with the usually reclusive Rousseau. They were wide-ranging, with the “resolute and relentless” Boswell pursuing “questions great and small, philosophical and personal” (p.156). When Boswell pressed Rousseau on how religious faith could be reconciled with reason, however, Rousseau’s answer was, in essence, that’s for you to figure out. Boswell did not fare much better with Voltaire on how he might reconcile reason with religious faith.

          Unlike Rousseau, Voltaire was no recluse. He prided himself on being the “innkeeper of Europe” (p.174), and his residence at Ferney was usually overflowing with visitors. Despite spending several days at Ferney, Boswell managed a single one-on-one meeting with the man he described as the “Monarch of French Literature” (p.176). In a two-hour conversation that reached what Zaretsky terms “epic proportions” (p.178), the men took up the subject of religious faith. “If ever two men disputed with vehemence we did” (p.178), Boswell  wrote afterwards.  The young traveler wrote eight pages on the encounter in a document separate from his journal.  Alas, these eight pages have been lost to history. But we know that the traveler  left the meeting more than a little disappointed that Voltaire could not provide the definitive resolution he was seeking of how to bridge the chasm between reason and faith.

          After a short stay in Italy that included “ruins and galleries . . . brothels and bawdy houses . . . churches and cathedrals” (p.200), Boswell’s last stop on the Grand Tour was the island of Corsica, a distant and exotic location that few Britons had ever visited. There, he met General Pasquale Paoli, leader of the movement for Corsican independence from the city-state of Genoa, which exercised control over most of the island. Paoli was already attracting attention throughout Europe for his determination to establish a republican government on the island. Rousseau, who had been asked to write a constitution for an independent Corsica, wrote for Boswell a letter of introduction to Paoli. During a six-day visit to the island, Paoli treated the mesmerized Boswell increasingly like a son. Paoli “embodied those ancient values that Boswell most admired, though frequently failed to practice: personal integrity and public authority; intellectual lucidity and stoic responsibility” (p.232). Paoli’s leadership of the independence movement demonstrated to Boswell that heroism was still alive, an “especially crucial quality in an age like his of philosophical and religious doubt” (p.217). Upon returning to Britain, Boswell became a vigorous advocate for Paoli and the cause of Corsican independence.

        Boswell’s tour on the continent ended — and Zaretsky’s narrative ends — with a dramatic flourish that Zaretsky likens to episodes in Henry Fielding’s then popular novel Tom Jones. While Boswell was in Italy, Rousseau and Thérèse were forced to flee Môtiers because of hostile reaction to Voltaire’s revelation about the couple’s five children. By chance, David Hume, who had been in Paris, was able to escort Rousseau into exile in England, leaving Thérèse temporarily behind. Boswell somehow got wind of Thérèse’s situation and, sensing an opportunity to win favor with Rousseau, eagerly accepted her request to escort her to England to join her partner. But over the course of the 11-day trip to England, Boswell and Thérèse “found themselves sharing the same bed. Inevitably, Boswell recounted his sexual prowess in his journal: ‘My powers were excited and I felt myself vigorous’” (p.225). No less inevitably, Zaretsky notes, Boswell also recorded Thérèse’s “more nuanced response: ‘I allow that you are a hardy and vigorous lover, but you have no art’” (p.225).

* * *

       After following Boswell’s encounters across the continent with many of the period’s most illustrious figures, I was disappointed that Zaretsky does not return to the question he raises initially about the nature of the 18th century Enlightenment. It would have been interesting to learn what conclusions, if any, he draws from Boswell’s journey. Does the young Scot’s partaking of the thoughts of Voltaire, Rousseau and others, and his championing the cause of Corsican independence, suggest a single movement indifferent to national and cultural boundaries? Or should Boswell best be considered an emissary of a peculiarly Scottish form of Enlightenment? Or was Boswell himself too young, too impressionable – too full of himself – to allow for any broader conclusions to be drawn from his youthful experiences about the nature of the 18th century Enlightenment? These unanswered questions constitute a missed opportunity in an otherwise engaging account of a young man seeking to make sense of the intellectual currents that were riveting his 18th century world and to apply them in his personal life.

Thomas H. Peebles

Florence, Italy

January 25, 2017

 


Filed under European History, History, Intellectual History, Religion

Stopping History

 



Mark Lilla, The Shipwrecked Mind:

On Political Reaction 

            Mark Lilla is one of today’s most brilliant scholars writing on European and American intellectual history and the history of ideas. A professor of humanities at Columbia University and previously a member of the Committee on Social Thought at the University of Chicago (as well as a native of Detroit!), Lilla first came to public attention in 2001 with his The Reckless Mind: Intellectuals in Politics. This compact work portrayed eight 20th century thinkers who rejected Western liberal democracy and aligned themselves with totalitarian regimes. Some were well known, such as German philosopher and Nazi sympathizer Martin Heidegger, but most were quite obscure to general readers. He followed with another thought-provoking work, The Stillborn God: Religion, Politics, and the Modern West, a study of “political theology,” the implications of secularism and the degree to which religion and politics have been decoupled in modern Europe.

          In his most recent work, The Shipwrecked Mind: On Political Reaction, Lilla probes the elusive and, in his view, understudied mindset of the political reactionary.  The first thing we need to understand about reactionaries, he tells us at the outset, is that they are not conservatives. They are “just as radical as revolutionaries and just as firmly in the grip of historical imaginings” (p.xii).  The mission of the political reactionary is to “stand athwart history, yelling Stop,” Lilla writes, quoting a famous line from the first edition of William F. Buckley’s National Review, a publication which he describes as “reactionary” (p.xiii). But the National Review is widely considered as embodying the voice of traditional American conservatism, an indication that the distinction between political reactionary and traditional conservative is not always clear-cut.  Lilla’s notion of political reaction overlaps with other terms such as “anti-modern” and the frequently used “populism.” He mentions both but does not draw out distinctions between them and political reaction.

            For Lilla, political reactionaries have a heightened sense of doom and maintain a more apocalyptic worldview than traditional conservatives. The political reactionary is driven by a nostalgic vision of an idealized, golden past and is likely to blame “elites” for the deplorable current state of affairs. The betrayal of elites is the “linchpin of every reactionary story” (p.xiii), he notes. In a short introduction, Lilla sets forth these definitional parameters and also traces the origins of our concept of political reaction to a certain type of opposition to the French Revolution and the 18th century Enlightenment.

          The nostalgia for a lost world “settled like a cloud on European thought after the French Revolution and never fully lifted” (p.xvi), Lilla notes. Whereas conservative Edmund Burke recoiled at the French Revolution’s wholesale uprooting of established institutions and its violence yet was willing to admit that France’s ancien régime had grown ossified and required modification, quintessential reactionary Joseph de Maistre mounted a full-throated defense of the ancien régime. For de Maistre, 1789 “marked the end of a glorious journey, not the beginning of one” (p.xii).

         If the reactionary mind has its roots in counter-revolutionary thinking, it endures today in the absence of political revolution of the type that animated de Maistre. “To live a modern life anywhere in the world today, subject to perpetual social and technological change, is to experience the psychological equivalent of permanent revolution,” Lilla writes (p.xiv). For the apocalyptic imagination of the reactionary, “the present, not the past, is a foreign country” (p.137). The reactionary mind is thus a “shipwrecked mind. Where others see the river of time flowing as it always has, the reactionary sees the debris of paradise drifting past his eyes. He is time’s exile” (p.xiii).

      The Shipwrecked Mind is not a systematic or historical treatise on the evolution of political reaction. Rather, in a disparate collection of essays, Lilla provides examples of reactionary thinking. He divides his work into three main sections, “Thinkers,” “Currents,” and “Events.” “Thinkers” portrays three 20th century intellectuals whose works have inspired modern political reaction. “Currents” consists of two essays with catchy titles, “From Luther to Wal-Mart” and “From Mao to St. Paul;” the former is a study of “theoconservatism,” reactionary religious strains found within traditional Catholicism, evangelical Protestantism, and neo-Orthodox Judaism; the latter looks at a more leftist nostalgia for a revolutionary past. “Events” contains Lilla’s reflections on the January 2015 terrorist attacks in Paris on the Charlie Hebdo publication and a kosher supermarket. But like the initial “Thinkers” section, “Currents” and “Events” are above all introductions to the works of reactionary thinkers, most of whom are likely to be unfamiliar to English language readers.

            The Shipwrecked Mind appeared at about the same time as the startling Brexit vote in the United Kingdom, a time when Donald Trump was in the equally startling process of securing the Republican Party’s nomination for the presidency of the United States. Neither Brexit nor the Trump campaign figures directly in Lilla’s analysis and  readers will therefore have to connect the dots themselves between his diagnosis of political reaction and these events. Contemporary France looms larger in his effort to explain the reactionary mind, in part because Lilla was in Paris at the time of the January 2015 terrorist attacks.

* * *

            “Thinkers,” Lilla’s initial section, is similar in format to The Reckless Mind, consisting of portraits of Leo Strauss, Eric Voegelin, and Franz Rosenzweig, three German-born theorists whose work is “infused with modern nostalgia” (p.xvii). Of the three, readers are most likely to be familiar with Strauss (1899-1973), a Jewish refugee from Germany whose parents died in the Holocaust. Strauss taught philosophy at the University of Chicago from 1949 up to his death in 1973. Assiduous tomsbooks readers will recall my review in January 2014 of The Truth About Leo Strauss: Political Philosophy and American Democracy, by Michael and Catherine Zuckert, which dismissed the purported connection between Strauss and the 2003 Iraq war as based on a failure to dig deeply enough into Strauss’ complex, tension-ridden views about America and liberal democracy. Like the Zuckerts, Lilla considers the connection between Strauss and the 2003 Iraq war “misplaced” and “unseemly,” but, more than the Zuckerts, finds “quite real” the connection between Strauss’ thinking and that of today’s American political right (p.62).

        Strauss’ salience to political reaction starts with his view that Machiavelli, whom Strauss considered the first modern philosopher, is responsible for a decisive historical break in the Western philosophical tradition. Machiavelli turned philosophy from “pure contemplation and political prudence toward willful mastery of nature” (p.xviii), thereby introducing passion into political and social life. Strauss’ most influential work, Natural Right and History, argued that “natural justice” is the “standard by which political arrangements must be judged” (p.56). After the tumult of the 1960s, some of Strauss’ American disciples began to see this work as an argument that the West is in crisis, unable to defend itself against internal and external enemies. Lilla suggests that Natural Right and History has been misconstrued in the United States as an argument that political liberalism’s rejection of natural rights leads invariably to a relativism indistinguishable from nihilism. This misinterpretation led “Straussians” to the notion that the United States has a “redemptive historical mission — an idea nowhere articulated by Strauss himself” (p.61).

          Voegelin (1901-1985), a contemporary of Strauss, was born in Germany and raised in Austria, from which he fled in 1938 at the time of its Anschluss with Germany. Like Strauss, he spent most of his academic career in the United States, where he sought to explain the collapse of democracy and the rise of totalitarianism in terms of a “calamitous break in the history of ideas, after which intellectual and political decline set in” (p.xviii). Voegelin argued that in inspiring the liberation of politics from religion, the 18th century Enlightenment gave rise in the 20th century to mass ideological movements such as Marxism, fascism and nationalism. Voegelin considered these movements “‘political religions,’ complete with prophets, priests, and temple sacrifices” (p.31). As Lilla puts it, for Voegelin, when you abandon the Lord, it is “only a matter of time before you start worshipping a Führer” (p.31).

        Rosenzweig (1886-1929) was a German Jew who gained fame in his time for backing off at the last moment from a conversion to Christianity – the equivalent of leaving his bride at the altar – and went on to dedicate his life to a revitalization of Jewish thought and practice. Rosenzweig shared an intellectual nostalgia prevalent in pre-World War I Germany that saw the political unification of Germany decades earlier, while giving rise to a wealthy bourgeois culture and the triumph of the modern scientific spirit, as having extinguished something essential that could “only be recaptured through some sort of religious leap.” (p.4). Rosenzweig rejected Judaism’s efforts to reform itself “according to modern notions of historical progress, which were rooted in Christianity” in favor of a new form of thinking that would “turn its back on history in order to recapture the vital transcendent essence of Judaism” (p.xvii-xviii).

          Lilla’s sensitivity to the interaction between religion and politics, the subject of The Stillborn God and the portraits of Voegelin and Rosenzweig here, is again on display in the two essays in the middle “Currents” section. In “From Luther to Wal-Mart,” Lilla explores how, despite doctrinal differences, traditional Catholicism, evangelical Protestantism, and neo-Orthodox Judaism in the United States came to share a “sweeping condemnation of America’s cultural decline and decadence.” This “theoconservatism” (p.xix) blames today’s perceived decline and decadence on reform movements within these denominations and what they perceive as secular attacks on religion generally, frequently tracing the attacks to the turbulent 1960s as the significant breaking point in American political and religious history.

         Two works figure prominently in this section, Alasdair MacIntyre’s 1981 After Virtue, and Brad Gregory’s 2012 The Unintended Reformation. MacIntyre, echoing de Maistre, argued that the Enlightenment had undone a system of morality worked out over centuries, unwittingly preparing the way for “acquisitive capitalism, Nietzscheanism, and the relativistic liberal emotivism we live with today, in a society that ‘cannot hope to achieve moral consensus’” (p.74-75). Gregory, inspired by MacIntyre, attributed contemporary decline and decadence in significant part to forces unleashed in the Reformation, undercutting the orderliness and certainty of “medieval Christianity,” his term for pre-Reformation Catholicism. Building on Luther and Calvin, Reformation radicals “denied the need for sacraments or relics,” and left believers unequipped to interpret the Bible on their own, leading to widespread religious conflict. Modern liberalism ended these conflicts but left us with the “hyper-pluralistic, consumer-driven, dogmatically relativistic world of today. And that’s how we got from Luther to Walmart” (p.78-79).

        “From Mao to St. Paul” considers a “small but intriguing movement on the academic far left” which maintains a paradoxical nostalgia for “revolution” or “the future,” and sees “deep affinities” between Saint Paul and modern revolutionaries such as Lenin and Chairman Mao (p.xx). Jacob Taubes, a peripatetic Swiss-born Jew who taught in New York, Berlin, Jerusalem and Paris, sought to demonstrate in The Political Teachings of Paul that Paul was a “distinctively Jewish fanatic sent to universalize the Bible’s hope of redemption, bringing this revolutionary new idea to the wider world. After Moses, there was never a better Jew than Paul” (p.90). French theorist Alain Badiou, among academia’s last surviving Maoists, argued that Paul was to Jesus as Lenin was to Marx. The far left academic movement’s most prominent influence is Nazi legal scholar Carl Schmitt, Hitler’s “crown jurist” (p.99), a thinker portrayed in The Reckless Mind who emphasized the importance of human capacity and will rather than principles of natural right in organizing society.

           The third section, “Events,” considers France’s simmering cultural war over the place of Islam in French society, particularly in the aftermath of the January 2015 terrorist attacks in Paris, which Lilla sees as a head-on collision between two forms of political reaction:

On the one side was the nostalgia of the poorly educated killers for an imagined, glorious Muslim past that now inspires dreams of a modern caliphate with global ambitions. On the other was the nostalgia of French intellectuals who saw in the crime a confirmation of their own fatalistic views about the decline of France and the incapacity of Europe to assert itself in the face of a civilizational challenge (p.xx).

        France’s struggle to integrate its Muslim population, Lilla argues, has revived a tradition of cultural despair and nostalgia for a Catholic monarchist past that had flourished in France between the 1789 Revolution and the fall of France in 1940, but fell out of favor after World War II because of its association with the Vichy government and France’s role in the Holocaust. In the early post-war decades in France, it was “permissible for a French writer to be a conservative but not a reactionary, and certainly not a reactionary with a theory of history that condemned what everyone else considered to be modern progress” (p.108). Today, it is once again permissible in France to be a reactionary.

          “Events” concentrates on two best-selling works that manifest the revival of the French reactionary tradition, Éric Zemmour’s Le Suicide français, published in 2014, and Michel Houellebecq’s dystopian novel, Submission, first published on the very day of the January 2015 Charlie Hebdo attacks, an “astonishing, almost unimaginable” coincidence (p.116). Le Suicide français presents a “grandiose, apocalyptic vision of the decline of France” (p.108), with a broad range of culprits contributing to the decline, including feminism, multiculturalism, French business elites, and European Union bureaucrats. But Zemmour reserves particular contempt for France’s Muslim citizens. Le Suicide français provides the French right with a “common set of enemies,” stirring an “outraged hopelessness – which in contemporary politics is much more powerful than hope” (p.117).

         Submission is the story of the election in France of a Muslim President in 2022, with the support of France’s mainstream political parties, which seek to prevent the far right National Front party from winning the presidency. In Lilla’s interpretation, the novel serves to express a “recurring European worry that the single-minded pursuit of freedom – freedom from tradition and authority, freedom to pursue one’s own ends – must inevitably lead to disaster” (p.127). France for Houellebecq “regrettably and irretrievably, lost its sense of self” as a result of a wager on history made at the time of the Enlightenment that the more Europeans “extended human freedom, the happier they would be” (p.128-29). For Houellebecq, “by any measure France’s most significant contemporary writer” (p.109), that wager has been lost. “And so the continent is adrift and susceptible to a much older temptation, to submit to those claiming to speak for God” (p.129).

          Lilla’s section on France ends on this ominous note. But in an “Afterword,” Lilla returns to contemporary Islam, the other party to the head-on collision of competing reactionaries at work in the January 2015 terrorist attacks in Paris and their aftermath. Islam’s belief in a lost Golden Age is the “most potent and consequential” political nostalgia in operation today (p.140), Lilla contends. According to radical Islamic myth, out of a state of jahiliyya, ignorance and chaos, the Prophet Muhammad was “chosen as the vessel of God’s final revelation, which uplifted all individuals and peoples who accepted it.” But, “astonishingly soon, the élan of this founding generation was lost. And it has never been recovered” (p.140). Today the forces of secularism, individualism, and materialism have “combined to bring about a new jahiliyya that every faithful Muslim must struggle against, just as the Prophet did at the dawn of the seventh century” (p.141).

* * *

          The essays in this collection add up to what Lilla describes as a “modest start” (p.xv) in probing the reactionary mindset and are intriguing as far as they go. But I finished The Shipwrecked Mind hoping that Lilla will extend this modest start. Utilizing his extensive learning and formidable analytical skills, Lilla is ideally equipped to provide a systematic, historical overview of the reactionary tradition, an overview that would highlight its relationship to the French Revolution and the 18th century Enlightenment in particular but to other historical landmarks as well, especially the 1960s. In such a work, Lilla might also provide more definitional rigor to the term “political reactionary” than he does here, elaborating upon its relationship to traditional conservatism, populism, and anti-modernism. Through what might be a separate work, Lilla is also well placed to help us connect the dots between political reaction and the turmoil generated by Brexit and the election of Donald Trump. In less than six months, moreover, we will also know whether we will need to ask Lilla to connect the dots between his sound discussion here of political reaction in contemporary France and a National Front presidency.

 

Thomas H. Peebles

La Châtaigneraie, France

January 5, 2017

 

 


 


Filed under Intellectual History, Political Theory, Religion

Refining the Rubric: tomsbooks@5


     December 2016 marks the end of tomsbooks’ fifth year. On January 22, 2012, a time when I barely knew what a blog was, I posted a review of John McWhorter’s Doing Our Own Thing: The Degradation of Language and Music and Why We Should, Like, Care. That was the first of 93 postings over the next five years, reviewing 102 books (had I been counting, I probably would have paused last month to observe the 100th book reviewed, David Maraniss’ Once In a Great City: A Detroit Story, a work on my home town during my senior year in high school). The year-by-year tally of reviews is as follows:

Year       Reviews      Books Reviewed
2012          25                32
2013          19                20
2014           9                  9
2015          21                22
2016          19                19

From the beginning, my goal has been to point general, well educated but non-specialist readers to works unlikely to make best-seller lists but, in most cases, worthy of their consideration and reading time  — works often overlooked in major publications like The New York Times Book Review, The Los Angeles Times Book Review, The New York Review of Books or The London Review of Books.  Most of the books reviewed here fit into a rubric of “modern history, politics, and political theory,” with McWhorter’s work being perhaps one of the few exceptions.

     Earlier this month, I sought to refine that rubric through an index by specific subject matter of the five years of reviews, now completed and available upon request. Currently, I have 37 different categories (e.g., “French History,” “Thinkers,” “Biography/Autobiography,” “Cold War”), with much overlap; almost all reviews fit into more than one category. Looking at the index makes clear that my overall focus has most frequently been on either the United States or Europe, in many cases both. But if America and Europe are my comfort zone, I have ventured outside of it on more than a few occasions to review books rooted in other areas. My most recent review was on sub-Saharan Africa, and I have reviewed books on Saudi Arabia, Iran, and Pakistan, plus several books on Islam (e.g., here, here and here).

     While producing 1-2 reviews per month over the past five years, I gradually came to the realization that I have also been refining the rubric of “modern history, politics, and political theory” in another, more substantive way. I now see that almost all the books reviewed here explore in one way or another the concept of modern liberal democracy, a concept you  can define in different ways. The word “liberal” should not be equated with that term as used in everyday political discourse in the United States; it’s more like the “liberal” in “liberal arts.” For me, broadly stated, liberal democracy is the system of representative government we’ve come to take for granted in the West, a system that seeks to maximize both individual liberty and equality among citizens, through free and fair elections, the rule of law, free but regulated markets, and respect for human rights. Liberal democracy is also decidedly pluralist, seeking to provide a channel for as many voices as possible to compete for influence in a free and orderly, if at times cacophonous, process.

     There is no single category for “democracy” or “liberal democracy” among the 37 specific categories in my index, and I can think of only one book reviewed here that addresses the subject head on, Timothy Ferris’ The Science of Liberty: Democracy, Reason, and the Laws of Nature, reviewed in May 2012 (and categorized in the index under “Political Theory” and “Intellectual History”). But just about every book reviewed here necessarily addresses, however indirectly, some aspect of the subject of liberal democracy, raising issues pertinent to its story in modern times: what is it; where does it come from; where is it going; how should it work; how does it work; what are its strengths and weaknesses, its successes and failures. Of course, this includes books on instances where liberal democracy has “gone off the rails,” most notably in Nazi Germany and Bolshevik Russia. By my count, I have reviewed 13 books on the Nazi period in Germany and 6 on communist rule in the Soviet Union, along with 10 on “totalitarianism,” the antithesis of liberal democracy.

     Winston Churchill once famously described democracy as the worst form of government, except for all the others that have been tried (and he could just as easily have said “liberal democracy”). It may be difficult to disagree with Churchill’s quirky formulation, but I think a more generous one is warranted. I consider modern liberal democracy to be among the most uplifting and powerful ideas that our collective human civilization has put into practice over the last three centuries, providing unparalleled opportunity for a high quality of life to individuals fortunate enough to live in liberal democratic states. But no one has to tell me that these are dispiriting times for liberal democracy in much of the world, starting very close to home.

* * *


     I trace liberal democracy’s modern roots principally to Great Britain, the United States and France, three countries I have been lucky enough to live, work and study in. Perhaps because of this happy coincidence, the history and politics of these three countries are the starting point for my bookish interests. But in each today, in different ways, liberal democracy seems to be on the defensive, facing rising xenophobia, ethnic nationalism and raging populism. In a “Brexit” campaign tinged with no small dose of anti-immigrant sentiment, Great Britain notoriously voted in June of this year to pull out of the European Union, arguably the most critical multinational liberal democratic project of the post-World War II era. The United States in November elected as its next President a man who seems at best indifferent to the ideals of liberal democracy, and often hostile to them. In France, the National Front, a xenophobic, anti-immigrant and anti-European party, appears at this juncture to have a better-than-even chance of seeing its leader reach the final round of the French presidential election, scheduled for May 2017.

     Beyond this core, the outlook for liberal democracy at the end of 2016 appears at least equally bleak. Anti-immigrant and anti-liberal parties are gaining ground elsewhere across Europe, in countries as diverse as the Netherlands, Austria, Hungary and Poland (although an insurgent anti-immigrant party in Austria recently suffered a setback in its bid for the country’s presidency). On Europe’s periphery, both Russia and Turkey have overtly embraced authoritarian, anti-democratic rule, sometimes explicitly described as “illiberal democracy.”  Meanwhile China, the world’s newest economic behemoth, maintains the oddest of combinations: a form of free-market capitalism coupled with strict control over individuals’ lives and the absence of the most basic political freedoms.

    I wistfully recall the optimism of the 1990s, when the Berlin Wall had fallen, the Soviet Union had collapsed, and apartheid had been consigned to the dustbin of South African history.  Liberal democracy seemed to be on the march throughout the world.  It was then described as the world’s “default option,” with one commentator famously declaring the “end of history.”  In the second half of the 21st century’s second decade, that seems like a quaint, bygone era, one that feels far more distant than a mere quarter century.  Today, our global civilization appears to be heading into the darkest and most difficult period for liberal democracy since the 1930s.  We can only hope that a catastrophe analogous to the world war that erupted at the end of that decade can be averted.

     Despite the multiple reasons for concern if not outright alarm about the near future, I remain “cautiously optimistic,” as the diplomats say, about the future of liberal democracy throughout the world. It is quite simply too powerful an idea to be bottled up over the long term. While older and more rural populations may favor authoritarian rulers who promise to restore some type of mythological past, the future is not with these demographic groups. Liberal democracy will continue to find strong and often courageous support among well-educated young people and in urban centers throughout the world — in Cairo and Teheran, Moscow and Ankara, as well as Paris, London and New York.

      A spate of books on the future of liberal democracy is likely to flood the market in the months and years ahead, including works analyzing the previously unimaginable rise to power in the United States of an authoritarian leader with no apparent attachment to America’s abiding democratic principles. Occasionally I may elect to review such works, but I suspect those instances will be rare. Rather, I envision continuing to address liberal democracy less directly, through reviews of serious works on history and politics which, often unintentionally, contain some insight, broad or narrow, about the perils and possibilities of liberal democracy. I hope you will stay with me for what looks like a bumpy ride over the months and years ahead.

Thomas H. Peebles
La Châtaigneraie, France
December 24, 2016
