Blithe Optimist



Rick Perlstein, The Invisible Bridge:
The Fall of Nixon and the Rise of Reagan

     Rick Perlstein has spent his career studying American conservatism in the second half of the 20th century and its capture of the modern Republican Party. His first major work, Before the Storm: Barry Goldwater and the Unmaking of the American Consensus, was an incisive and entertaining study of Senator Barry Goldwater’s 1964 Republican Party nomination for the presidency and his landslide loss that year to President Lyndon Johnson. He followed with Nixonland: The Rise of a President and the Fracturing of America, a description of the nation at the time of Richard Nixon’s landslide 1972 victory over Senator George McGovern  — a nation divided by a cultural war between “mutually recriminating cultural sophisticates on the one hand and the plain, earnest ‘Silent Majority’ on the other” (p.xix). Now, in The Invisible Bridge: The Fall of Nixon and the Rise of Reagan, Perlstein dives into American politics between 1973 and 1976, beginning with Nixon’s second term and ending with the failed bid of the book’s central character, Ronald Reagan, for  the 1976 Republican Party presidential nomination.

     The years 1973 to 1976 included the Watergate affair that ended the Nixon presidency in 1974; the ultra-divisive issue of America’s engagement in Vietnam, which ended in an American withdrawal from that conflict in 1975; and the aftershocks from the cultural transformations often referred to as “the Sixties.” It was a time, Perlstein writes, when America “suffered more wounds to its ideal of itself than at just about any other time in its history” (p.xiii). The year 1976 was also the bicentennial of the signing of the Declaration of Independence, which the nation approached with trepidation. Many feared, as Perlstein puts it, that celebration of the nation’s 200th anniversary would serve the “malign ideological purpose of dissuading a nation from a desperately needed reckoning with the sins of its past” (p.712).

     Perlstein begins by quoting advice Nikita Khrushchev purportedly provided to Richard Nixon: “If the people believe there’s an imaginary river out there, you don’t tell them there’s no river there. You build an imaginary bridge over the imaginary river.” Perlstein does not return to Khrushchev’s advice and, as I ploughed through his book, I realized that I had not grasped how the notion of an “invisible bridge” fits into his lengthy (804 pages!) narrative. More on that below. There’s no mystery, however, about Perlstein’s sub-title “The Fall of Nixon and the Rise of Reagan.”

     About one third of the book addresses Nixon’s fall in the Watergate affair and another third recounts Reagan’s rise to challenge President Gerald Ford for the 1976 Republican Party presidential nomination, including the year’s presidential primaries and the maneuvering of the Ford and Reagan presidential campaigns at the Republican National Convention that summer. The remaining third consists of biographical background on Reagan and his evolution from a New Deal liberal to a conservative Republican; an examination of the forces that were at work in the early 1970s to mobilize conservatives after Goldwater’s disastrous 1964 defeat; and Perlstein’s efforts to describe the American cultural landscape in the 1970s and capture the national mood, through a dazzling litany of vignettes and anecdotes. At times, it seems that Perlstein has seen every film that came to theatres in the first half of the decade; watched every television program from the era; and read every small and mid-size town newspaper.

     Perlstein describes his work as a “sort of biography of Ronald Reagan – of Ronald Reagan, rescuer” (p.xv) — rescuer, presumably, of the American psyche from the cultural convulsions of the Sixties and the traumas of Watergate and Vietnam that had shaken America’s confidence to the core. Perlstein considers Reagan to have been a gifted politician who exuded a “blithe optimism in the face of what others called chaos” (p.xvi), with an uncanny ability to simplify complex questions, often through stories that could be described as homespun or hokey, depending upon one’s perspective. Reagan was an “athlete of the imagination,” Perlstein writes, who was “simply awesome” at “turning complexity and confusion and doubt into simplicity and stout-heartedness and certainty” (p.48). This power was a key to “what made others feel so good in his presence, what made them so eager and willing to follow him – what made him a leader. But it was why, simultaneously, he was such a controversial leader” (p.xv).   Many regarded Reagan’s blithe optimism as the work of a “phony and a hustler” (p.xv). At bottom, Reagan was a divider and not a uniter, Perlstein argues, and “understanding the precise ways that opinions about him divided Americans . . . better helps us to understand our political order of battle today: how Americans divide themselves from one another” (p.xvi).

* * *

     In a series of biographical digressions, Perlstein demonstrates how Reagan’s blithe Midwestern optimism served as the foundation for a long conversion to political conservatism. Perlstein begins with Reagan’s upbringing in Illinois, his education at Illinois’ Eureka College, and his early years as a sportscaster in Iowa. Reagan left the Midwest in 1937 for Hollywood and a career in films, arriving in California as a “hemophiliac, bleeding heart liberal” (p.339). But, during his Hollywood years, Reagan came to see Communist Party infiltration of the film industry as a menace to the industry’s existence. He was convinced that Communist actors and producers had mastered the subtle art of making the free enterprise system look bad and thereby were undermining the American way of life. Reagan became an informant for the FBI on the extent of Communist infiltration of Hollywood, a “warrior in a struggle of good versus evil – a battle for the soul of the world” (p.358), as Perlstein puts it. Reagan further came to resent the extent of taxation and viewed the IRS as a public enemy second only to Communists.

     Yet, Reagan remained a liberal Democrat through the 1940s. In 1948, he worked for President Truman’s re-election and introduced Minneapolis mayor Hubert Humphrey to a national radio audience. In 1952, Reagan supported Republican Dwight Eisenhower’s bid for the presidency. His journey toward the conservative end of the spectrum was probably completed when he became host in 1954 of General Electric’s “GE Theatre,” a mainstay of early American television. GE, one of America’s corporate giants, projected a self-image of a family that functioned in frictionless harmony, with the interests of labor and management miraculously aligned. GE episodes, Perlstein writes, were the “perfect expression” of the 1950s faith that nothing “need ever remain in friction in the nation God had ordained to benevolently bestride the world” (p.395). Reagan and his blithe optimism proved to be a perfect fit with GE Theatre’s mission of promoting its brand of Americanism, based on low taxes, unchallenged managerial control, and freedom from government regulatory interference.

     In the 1960 presidential campaign, Reagan depicted the progressive reforms which Democratic nominee John Kennedy advocated as being inspired by Karl Marx and Adolf Hitler. Richard Nixon, Kennedy’s rival, noted Reagan’s evolution and directed his staff to use Reagan as a speaker “whenever possible. He used to be a liberal” (p.374). By 1964, Reagan had become a highly visible backer of Barry Goldwater’s presidential quest, delivering a memorable nationally televised speech in support of the candidate. Reagan went on to be elected twice as governor of California, in 1966 and 1970.

     While governor, Reagan consistently argued for less government.  Our highest national priority, he contended at a national governors’ conference in 1973, should be to “halt the trend toward bigger, more expensive government at all levels before it is too late . . . We as citizens will either master government as our servant or ultimately it will master us” (p.160). Almost alone among conservatives, Reagan projected an image of a “pleasant man who understands why people are angry” (p.604), as one commentator put it. He gained fame if not notoriety during his tenure as governor for his hard line opposition to student protesters, particularly at the University of California’s Berkeley campus, attracting scores of working class Democrats who had never previously voted for a Republican. “Part of what made Berkeley [student unrest] such a powerful issue for traditionally Democratic voters was class resentment – something Ronald Reagan understood in his bones” (p.83).

     Early in Reagan’s second term as California’s governor, on June 17, 1972, five burglars were caught attempting to break into the Democratic National Committee headquarters in Washington’s Watergate office and apartment complex. Throughout the ensuing investigation, Reagan seemed indifferent to what Time Magazine termed “probably the most pervasive instance of top-level misconduct in [American] history” (p.77).

* * *

     To Reagan, Watergate was part of the usual atmosphere of campaigning, not much more than a prank. Upon first learning about the break-in, he quipped that the Democrats should be happy that someone considered their documents worth reading. Throughout the investigation into corruption that implicated the White House, Reagan maintained a stubborn “Christian charity to a fallen political comrade” (p.249). The individuals involved, he argued, were “not criminals at heart” (p.81). He told conservative commentators Rowland Evans and Robert Novak that he found “no evidence of criminal activity” in Watergate, which was why Nixon’s detractors were training their fire on “vague areas like morality and so forth” (p.249-50). Alone among political leaders, Reagan insisted that Watergate “said nothing important about the American character” (p.xiv).

     Thus, few were surprised when Reagan supported President Gerald Ford’s widely unpopular presidential pardon of Nixon for any crimes he might have committed related to Watergate, issued one month after Nixon’s resignation. Nixon had already suffered “punishment beyond anything any of us could imagine” (p.271), Reagan argued. Ford’s pardon of Nixon dissipated the high level of support that he had enjoyed since assuming the presidency, sending his public approval ratings from near-record highs to new lows. Democrats gained a nearly 2-1 advantage in the House of Representatives in the 1974 mid-term elections and Reagan’s party “seemed near to death” (p.329).

     As Ford’s popularity waned, Reagan saw an opportunity to challenge the sitting president. He announced his candidacy in November 1975. Reagan said he was running against what he termed a “buddy system” in Washington, an incestuous network of legislators, bureaucrats, and lobbyists which:

functions for its own benefit – increasingly insensitive to the needs of the American worker, who supports it with his taxes. . . I don’t believe for one moment that four more years of business as usual in Washington is the answer to our problems, and I don’t believe the American people believe it, either (p.547).

With Reagan’s bid for the 1976 Republican nomination, Perlstein’s narrative reaches its climactic conclusion.

* * *

     The New York Times dismissed the presidential bid as an “amusing but frivolous Reagan fantasy” and wondered how Reagan could be “taken so seriously by the news media” (p.546). Harper’s termed Reagan the “Candidate from Disneyland” (p.602), labeling him “Nixon without the savvy or self pity. . . That he should be regarded as a serious candidate for President is a shame and embarrassment” (p.602). Commentator Garry Wills responded to Reagan’s charge that the media was treating him unfairly by conceding that it was indeed “unfair to expect accuracy or depth” from Reagan (p.602). But, as Perlstein points out, these comments revealed “more about their authors than they did about the candidate and his political prospects” (p.602), reflecting what he terms elsewhere the “myopia of pundits, who so frequently fail to notice the very cultural ground shifting beneath their feet” (p.xv).

     1976 proved to be the last year either party determined its nominee at the convention itself, rather than in advance. Reagan went into the convention in Kansas City as the most serious threat to an incumbent president since Theodore Roosevelt had challenged William Howard Taft for the Republican Party nomination in 1912. His support in the primaries and at the convention benefitted from a conservative movement that had come together to nominate Barry Goldwater in 1964, a committed “army that could lose a battle, suck it up, and then regroup to fight a thousand battles more” (p.451) — “long memoried elephants” (p.308), Perlstein terms them elsewhere.

     In the years since the Goldwater nomination, evangelical Christians had become more political, moving from the margins to the mainstream of the conservative movement. Evangelical Christians were behind an effort to have America declared officially a “Christian nation.” Judicially imposed busing of school students to achieve greater racial balance in public schools precipitated a torrent of opposition in cities as diverse as Boston, Massachusetts and Louisville, Kentucky – the Boston opposition organization was known as ROAR, Restore Our Alienated Rights. Perlstein also traces the conservative reaction to the Supreme Court’s 1973 Roe v. Wade decision, which recognized a constitutional right to abortion. The 1976 Republican Party platform for the first time recommended a Human Life Amendment to the Constitution to reverse the decision.

     Activist Phyllis Schlafly, who died just weeks ago, led a movement to derail the proposed Equal Rights Amendment, intended to establish gender equality as a constitutional mandate. Schlafly’s efforts contributed to stopping the proposed amendment at a time when ratification by only three additional states would have made the amendment part of the federal Constitution (“Don’t Let Satan Have Its Way – Stop the ERA” was the opposition slogan, as well as Perlstein’s title for a chapter on the subject). Internationally, conservatives opposed the Ford administration’s intention to relinquish to Panama control of the Panama Canal; and the policy of détente toward the Soviet Union which both the Nixon and Ford administrations pursued.

     Enabling the long-memoried elephants was Richard Viguerie, a little-known master of new technologies for fund-raising and grassroots get-out-the-vote campaigns. Conservative opinion writers like Patrick Buchanan, a former Nixon White House speechwriter, and George Will also enjoyed expanded newspaper coverage. A fledgling conservative think tank based in Washington, the Heritage Foundation, became a clearinghouse for conservative thinking and action. The Heritage Foundation assisted a campaign in West Virginia to purge school textbooks of “secular humanism.”

     With the contest for delegates nearly even as the convention approached, Reagan needed the support of conservatives for causes like these. But Reagan also realized that limited support from centrist delegates could prove to be his margin of victory. In a bid to attract such delegates, especially from the crucial Pennsylvania delegation, Reagan promised in advance of the convention to name Pennsylvania Senator Richard Schweiker as his running mate. Schweiker came from the moderate wing of the party, with a high rating from the AFL-CIO. But the move backfired, infuriating conservatives — North Carolina Senator Jesse Helms in particular — with few moderate delegates switching to Reagan. Then, Reagan’s supporters proposed a change to the convention’s rules that would have required Ford to announce his running mate prior to the presidential balloting, forcing Ford to anger either the moderate or conservative faction of the party. Ford supporters rejected the proposal, which was narrowly defeated on the convention floor.

     The Mississippi delegation proved to be crucial in determining the outcome of the convention’s balloting. When it cast its lot with Ford, the president had a sufficient number of delegates to win the nomination on the first ballot, 1187 votes to 1070 for Reagan. Ford selected Kansas Senator Robert Dole as his running mate, after Vice President Nelson Rockefeller, whom conservatives detested, announced the previous fall that he did not wish to be a candidate for Vice President. Anxious to achieve party unity, Ford invited Reagan to join him on the platform following his acceptance speech. Reagan gave an eloquent impromptu speech that many thought overshadowed Ford’s own acceptance address.

* * *

     Perlstein includes a short, epilogue-like summation to the climactic Kansas City convention: Ford went on to lose to Jimmy Carter, the Democratic former governor of Georgia, in a close 1976 general election and Reagan emerged as the undisputed leader of his party’s conservative wing. But as the book ended, I found myself still asking how the notion of an “invisible bridge” fits into this saga. My best guess is that the notion is tied to Perlstein’s description of Reagan as a “rescuer.” Reagan’s failed presidential campaign was a journey across a great divide – over an invisible bridge.

     On the one side were Watergate, the Vietnam War, repercussions from the Sixties and, for conservatives, Goldwater’s humiliating 1964 defeat. On the other side was the promise of an unsullied way forward. Reagan’s soothing cult of optimism offered Americans a message that could allow them to again view themselves and their country positively. There were no sins that Reagan’s America need atone for. Usually dour and gloomy conservatives — Perlstein’s “long memoried elephants” — also saw in Reagan’s buoyant message the discernible path to power that had eluded them in 1964. But, as Perlstein will likely underscore in a subsequent volume, many still doubted whether the blithe optimist had the temperament or the intellect to be president, while others suspected that his upbeat brand of conservatism could no more be sold to the country-at-large than the Goldwater brand in 1964.

Thomas H. Peebles

La Châtaigneraie, France

October 2, 2016





Filed under American Politics, American Society, Biography

Becoming FLOTUS



Peter Slevin, Michelle Obama: A Life 

             In Michelle Obama: A Life, Peter Slevin, a former Washington Post correspondent presently teaching at Northwestern University, explores the improbable story of Michelle LaVaughn Robinson, now Michelle Obama, the First Lady of the United States (a position known affectionately in government memos as “FLOTUS”). Slevin’s sympathetic yet probing biography shows how Michelle’s life was and still is shaped by the blue collar, working class environment of Chicago’s South Side, where she was born and raised. Michelle’s life in many ways is a microcosm of 20th century African-American experience. Michelle’s ancestors were slaves, and her grandparents were part of the “Great Migration” of the first half of the 20th century that sent millions of African-Americans from the rigidly segregated south to northern urban centers in search of a better life.  Michelle was born in 1964, during the high point of the American civil rights movement, and is thus part of the generation that grew up after that movement had widened the opportunities available to African Americans.

            The first half of the book treats Michelle’s early life as a girl growing up on the South Side of Chicago and her experiences as an African-American at two of America’s ultra-elite institutions, Princeton University and Harvard Law School.  The centerpiece of this half is the loving environment that Michelle’s parents, Fraser Robinson III and his wife Marian Shields Robinson, created for Michelle and her older brother Craig, born two years earlier in 1962.  The Robinson family emphasized the primacy of education as the key to a better future, along with hard work and discipline, dedication to family, regular church attendance, and community service.

            Michelle’s post-Harvard professional and personal lives form the book’s second half. Early in her professional career, Michelle met a young man from Hawaii with an exotic background and equally exotic name, Barack Hussein Obama. Slevin provides an endearing account of their courtship and marriage (their initial date is also the subject of a recent movie “Southside With You”). Once Barack enters the scene, however, the story becomes as much about his entry and dizzying rise in politics as it is about Michelle, and thus likely to be familiar to many readers.

            But in this half of the book, we also learn about Michelle’s career in Chicago; how she balanced her professional obligations with her parental responsibilities; her misgivings about the political course Barack seemed intent upon pursuing; her at first reluctant, then full-throated support for Barack’s long-shot bid for the presidency; and how she elected to utilize the platform which the White House provided to her as FLOTUS. Throughout, we see how Michelle retained the values of her South Side upbringing.

* * *

        Slevin provides an incisive description of 20th century Chicago, beginning in the 1920s, when Michelle’s grandparents migrated from the rural south. He emphasizes the barriers that African Americans experienced, limiting where they could live and work, their educational opportunities, and more. Michelle’s father Fraser, after serving in the U.S. army, worked in a Chicago water filtration plant until his death in 1991 from multiple sclerosis at age 55. Marian, still living (“the First Grandmother”), was mainly a “stay-at-home Mom.” In a city that “recognized them first and foremost as black,” Fraser and Marian refused to utilize the oppressive shackles of racism as an excuse for themselves or their children. The Robinson parents “saw it as their mission to provide strength, wisdom, and a measure of insulation to Michelle and Craig” (p.26). Their message to their children was that no matter what obstacles they faced because of their race or their working class roots, “life’s possibilities were unbounded. Fulfillment of those possibilities was up to them. No excuses” (p.47).

     The South Side neighborhood where Michelle and Craig were raised, although part of Chicago’s rigidly segregated housing patterns, offered a stable and secure environment, with well-kept if modest homes and strong neighborhood schools. The neighborhood and the Robinson household provided Michelle and Craig with what Craig later termed the “Shangri-La of upbringings” (p.33).  Fraser and Marian both regretted deeply that they were not college graduates. The couple consequently placed an unusually high premium on education for their children, adopting a savvy approach which parents today would be wise to emulate.

       For the two Robinson children, learning to read and write was a means toward the even more important goal of learning to think. Fraser and Marian advised their children to “use their heads, yet not to be afraid to make mistakes – in each case learning from what goes wrong” (p.46). We told them, Marian recounted, “Make sure you respect your teachers, but don’t hesitate to question them. Don’t allow even us to say just anything to you” (p.47). Fraser and Marian granted their children freedom to explore, test ideas and make their own decisions, but always within a framework that emphasized “hard work, honesty, and self-discipline. There were obligations and occasional punishment. But the goal was free thinking” (p.46).

       Both Robinson children were good students, but with diametrically opposite study methods. Michelle was methodical and obsessive, putting in long hours, while Craig largely coasted to good grades. Michelle went to Princeton in part because Craig was already a student there, but she did so with misgivings and concerns that she might not be up to its high standards. Prior to Princeton, Craig and Michelle had had little exposure to whites. If they experienced animosity in their early years, Slevin writes, it was “likely from African American kids who heard their good grammar, saw their classroom diligence, and accused them of ‘trying to sound white’” (p.49). At Princeton, however, a school which “telegraphed privilege” (p.71), Michelle began a serious contemplation of what it meant to be an African-American in a society where whites held most of the levers of power.

        As an undergraduate between 1981 and 1985, Michelle came to see a separate black culture existing apart from white culture. Black culture had its own music, language, and history which, as she wrote in a college term paper, should be attributed to the “injustices and oppressions suffered by this race of people which are not comparable to the experience of any other race of people through this country’s history” (p.91). Michelle observed that black public officials must persuade the white community that they are “above issues of race and that they are representing all people and not just Black people” (p.91-92). Slevin notes that Michelle’s description “strikingly foreshadowed a challenge that she and her husband would face twenty two years later as they aimed for the White House” (p.91). Michelle’s college experience was a vindication of the framework Fraser and Marian had created that allowed Michelle to flourish. At Princeton, Michelle learned that the girl from blue collar Chicago could “play in the big leagues” (p.94), as Slevin puts it.

            In the fall of 1985, Michelle entered Harvard Law School, another “lofty perch, every bit as privileged as Princeton, but certainly more competitive once classes began” (p.95). In law school, she was active in an effort to bring more African American professors to a faculty that was made up almost exclusively of white males. She worked for the Legal Aid Society, providing services to low income individuals. When she graduated from law school in 1988, she returned to Chicago – it doesn’t seem that she ever considered other locations. But, notwithstanding her activist leanings as a student, she chose to work as an associate in one of Chicago’s most prestigious corporate law firms, Sidley and Austin.

       Although located only a few miles from the South Side neighborhood where Michelle had grown up, Sidley and Austin was a world apart, another bastion of privilege, with some of America’s best known and most powerful businesses as its clients. The firm offered Michelle the opportunity to sharpen her legal skills, particularly in intellectual property protection and, at least equally importantly, pay off some of her student loans. But, like many idealistic young law graduates, she did not find work in a corporate law firm satisfying and left after two years.

        Michelle landed a job with the City of Chicago as an assistant to Valerie Jarrett, then the city’s Commissioner for Planning and Economic Development, who later became a valued White House advisor to President Obama. Michelle’s position was more operational than legal, serving as a “troubleshooter” with a discretionary budget that could be utilized to advance city programs at the neighborhood level on subjects as varied as business development, infant mortality, mobile immunization, and after-school programs. But working for the City of Chicago was nothing if not political, and Michelle left after 18 months to take a position in 1993 at the University of Chicago, located on Chicago’s South Side, not far from where she grew up.

     Although still another of America’s most prestigious educational institutions, the University of Chicago had always seemed like hostile territory to Michelle, incongruous with its surrounding low- and middle-income neighborhoods. But Michelle landed a position with a university program, Public Allies, designed to improve the University’s relationship with the surrounding communities. Notwithstanding her lack of warm feelings for the university, the position was an excellent fit. It afforded Michelle the opportunity to try her hand at bridging some of the gaps between the university and its less privileged neighbors.

          After nine years  with Public Allies, Michelle took a position in 2002 with the University of Chicago Hospital, again involved in public outreach, focused on the way the hospital could better serve the medical needs of the surrounding community. This position, Slevin notes, brought home to Michelle the massive inequalities within the American health care system, divided between the haves with affordable insurance and the have nots without it.  Michelle stayed in this position until early 2008, when she left to work on her husband’s long shot bid for the presidency. In her positions with the city and the university, Michelle developed a demanding leadership style for her staffs that she brought to the White House: result-oriented, given to micro-management, and sometimes “blistering” (p.330) to staff members whose performance fell short in her eyes.

* * *

       While working at Sidley and Austin, Michelle interviewed the young man from Hawaii, then in his first year at Harvard Law School, for a summer associate position. Michelle in Slevin’s account found the young man “very charming” and “handsome,” and sensed that, as she stated subsequently, he “liked my dry sense of humor and my sarcasm” (p.121). But if there was mutual attraction, it was the attraction of opposites. Barack Obama was still trying to figure out where his roots lay. Michelle Robinson, quite obviously, never had to address that question. Slevin notes that the contrast could “hardly have been greater” between Barack’s “untethered life and the world of the Robinson and Shields clans, so numerous and so firmly anchored in Chicago. He felt embraced and it surprised him” (p.128; Barack’s untethered life figures prominently in Janny Scott’s biography of Barack’s mother, Ann Dunham, reviewed here in July 2012).  For Barack, meeting the Robinson family for the first time was, as he later wrote, like “dropping in on the set of Leave It to Beaver” (p.127).  The couple married in 1992.

        Barack served in the Illinois Senate from 1997 to 2004. In 2000, he ran unsuccessfully for the United States House of Representatives, losing in a landslide. He had his breakthrough moment in 2004, when John Kerry, the Democratic Presidential candidate, invited him to deliver a now famous keynote address to that year’s Democratic National Convention. Later that year, he won an open seat in the United States Senate by a landslide after his Republican opponent had to drop out due to a sex scandal. In early 2007, he decided to run for the presidency.

       Michelle’s mistrust of politics was “deeply rooted and would linger long into Barack’s political career” (p.161), Slevin notes.  Her distrust was at the root of discernible frictions within their marriage, especially after their daughters were born — Malia in 1998 and Sasha in 2001. Barack’s political campaigning and professional obligations kept him away from home much of the time, to Michelle’s dismay. Michelle felt that she had accomplished more professionally than Barack, and was also saddled with parental duties in his absence. “It sometimes bothered her that Barack’s career always took priority over hers. Like many professional women of her age and station, Michelle was struggling with balance and a partner who was less involved – and less evolved – than she had expected” (p.180-81).

        Michelle was, to put it mildly, skeptical when her husband told her in 2006 that he was considering running for the presidency. She worried about further losing her own identity, giving up her career for four years, maybe eight, and living with the real possibility that her husband could be assassinated. Yet, once it became apparent that Barack was serious about such a run and had reached the “no turning back” point, Michelle was all in.  She became a passionate, fully committed member of Barack’s election team, a strategic partner who was “not shy about speaking up when she believed the Obama campaign was falling short” (p.219).

         With Barack’s victory over Senator John McCain in the 2008 presidential election, Michelle became what Slevin terms the “unlikeliest first lady in modern history” (p.4). The projects and messages she chose to advance as FLOTUS “reflected a hard-won determination to help working class and the disadvantaged, to unstack the deck. She was more urban and more mindful of inequality than any first lady since Eleanor Roosevelt” (p.5). Michelle reached out to children in the less favored communities in Washington, mostly African American, and thereafter to poor children around the world. She also concentrated on issues of obesity, physical fitness and nutrition, famously launching a White House organic vegetable garden. She developed programs to support the wives of American military personnel deployed in Iraq and Afghanistan, women struggling to “keep a toehold in the middle class” (p.293).

        In Barack’s second term, she adopted a new mission, called Reach Higher, which aimed to push disadvantaged teenagers toward college. Throughout her time as FLOTUS, Michelle tried valiantly to provide her two daughters with as close to a normal childhood as life in the White House bubble might permit. Slevin’s account stops just prior to the 2014 Congressional elections, when the Republicans gained control of the United States Senate, after gaining control of the House of Representatives in the prior mid-term elections in 2010.

       Slevin does not overlook the incessant Republican and conservative critics of Michelle. She appeared to many whites in the 2008 campaign as an “angry black woman,” which Slevin dismisses as a “simplistic and pernicious stereotype” (p.236). Right wing commentator Rush Limbaugh began calling her “Moochelle,” much to the delight of his listening audience. The moniker conjured images of a fat cow or a leech – synonymous with the term “moocher” which Ayn Rand used in her novels to describe those who “supposedly lived off the hard work of the producers” (p.316) — all the while slyly associating Michelle with “big government, the welfare state, big-spending Democrats, and black people living on the dole” (p.315).  Vitriol such as this, Slevin cautiously concludes, “could be traced to racism and sexism or, at a charitable minimum, a lack of familiarity with a black woman as accomplished and outspoken as Michelle” (p.286). In addition, criticism emerged from the political left, which “viewed Michelle positively but asked why, given her education, her experience, and her extraordinary platform, she did not speak or act more directly on a host of progressive issues, whether abortion rights, gender inequity, or the structural obstacles facing the urban poor” (p.286).

* * *

       Slevin’s book is not hagiography. As a conscientious biographer whose credibility is directly connected to his objectivity, Slevin undoubtedly looked long and hard for Michelle’s weak points and less endearing qualities. He did not come up with much, unless you consider being a strong, focused woman a negative quality. There is no real dark side to Michelle Obama in Slevin’s account, no apparent skeletons in any of her closets. Rather, the unlikely FLOTUS depicted here continues to reflect the values she acquired while growing up in Fraser and Marian Robinson’s remarkable South Side household.


Thomas H. Peebles

La Châtaigneraie, France

September 17, 2016







Filed under American Politics, American Society, Biography, Gender Issues, Politics, United States History

Mid-Life Embrace of Judaism



Steven Gimbel, Einstein: His Space and Times

            In Einstein: His Space and Times, Steven Gimbel, a professor of philosophy at Gettysburg College, offers a highly compact biography of Albert Einstein (1879-1955), well under 200 pages. With numerous Einstein biographies already available, Gimbel’s special angle lies in his emphasis upon Einstein’s Jewish roots – fittingly, since the work is one in the Yale University Press series “Jewish Lives” (and I can’t help wondering whether the editors of the series might be tempted to rename the series “Jewish Lives Matter”). Einstein was born into a Jewish family that Gimbel describes as “anti-observant” rather than simply “non-observant” (p.8). In 1896, as a 17-year-old, Einstein repudiated his Jewish heritage at the same time that he renounced his German citizenship. But he embraced Judaism enthusiastically in the 1920s, when he was over 40 years old, realizing that his Jewish heritage was an “inalienable part of who he was and who he was perceived to be” (p.4). As an adult, Einstein lived in Switzerland, Germany, and the United States — along with a short stint in Prague – but disdained the notion of national identity and was never really at home anywhere. In Gimbel’s account, Einstein’s midlife embrace of Judaism provided him with a sense of rootedness he failed to find in national identity or the places he lived.

     Gimbel provides a sharp chronological structure to his overview of Einstein’s life, dividing his book  into four major segments: Einstein’s  early years, from his birth in Ulm, Germany in 1879, to 1905, when he received his PhD degree in physics while working as an examiner in Switzerland’s patent office in Bern; 1905 to 1920, when he rose from the obscurity of a patent officer to international acclaim through his breakthrough theories altering the way we look at space, time, and the universe, to borrow from Gimbel’s subtly clever sub-title; 1920 to 1933, when Einstein embraced Judaism during the Weimar Republic, Germany’s experiment in liberal democracy established after the shock of its defeat in World War I; and 1933-55, beginning with Hitler’s rise to power in Germany and Einstein’s decision to leave Germany for the United States, where he remained until  his death.  Gimbel’s discussion of the major theories in physics that made Einstein a world famous scientist in his own day and a nearly mythological figure today is limited and laudably designed to be understandable to the average reader. Some readers may nonetheless find these portions of this concise volume slow going. But few should experience any such challenges in absorbing Gimbel’s highly readable account of how Einstein’s Jewish heritage shaped his views of the world and the universe.

* * *

    Einstein’s father, Hermann Einstein, was a salesman and engineer. His mother, Pauline Koch, was a “stay-at-home mom” in today’s parlance whom Gimbel describes as “[s]trong-willed, strong-minded, and sharp tongued” (p.7-8). In 1880, when Albert was one year old, his parents moved from Ulm to Munich, where young Albert entered a Catholic school a few years later. He began to play the violin at age six and throughout his life considered music “spiritual in the deepest sense” (p.13). When Einstein was in his teens, his family left Munich to pursue business opportunities in Italy. Einstein finished secondary school in Aarau, Switzerland, at the Argovian cantonal gymnasium.

     To avoid military service, Einstein renounced his German citizenship and surrendered his German passport in 1896, ostentatiously renouncing Judaism at that same time.  Later that year, he enrolled at the Swiss Federal Institute of Technology (ETH in German) in Zurich, studying math and physics. Zurich in Einstein’s university days was a “cosmopolitan playground filled with young people from across the Continent,” where radical new ideas were “in the air, and a sense of openness abounded” (p.22).

     Einstein’s future wife, Mileva Marić, a Serbian national, also enrolled at ETH in 1896. Mileva was the only woman in the math and physics section of the school. She was somewhat like Einstein’s mother, Gimbel indicates, “smart, sarcastic and strong willed” (p.23), with a passion for physics that rivaled that of Einstein. Her friendship with Einstein transformed into romance during their four years together at ETH. Unlike Einstein, however, who was awarded his degree in 1900, Mileva did not achieve a sufficient level in her studies to warrant a degree. Gimbel describes the Einstein who left ETH in 1900 as a “complicated personality,” brimming with self-confidence and a “strange combination of arrogance and empathy” (p.73). But the young physics graduate searched for work for nearly two years before securing a job as an assistant examiner in the Federal Office for Intellectual Property in Bern, where he evaluated patent applications.

     Sometime prior to 1903, Mileva became pregnant and went back to Serbia to have the baby, named Lieserl. It is not clear what happened to Lieserl. As Gimbel explains:

The custom at that time was for the children of unmarried parents to be adopted, usually by a family member or a close friend. This seems to have occurred, as news of Lieserl continued in correspondence for a little while. Mileva moved back to Zurich, where she received word that Lieserl had contracted scarlet fever. We do not know whether she survived. . . but we do know that Einstein never met his daughter (p.30).

Einstein and Mileva married in 1903, and the couple had two sons, Hans Albert and Eduard.

* * *

     While working in the patent office, Einstein studied at the University of Zurich for the PhD degree, which he earned in 1905. In a chapter entitled “The Miracle Year,” Gimbel explains how, in March, April and May of 1905, Einstein published three groundbreaking papers which provided new, revolutionary ways to view matter, light and space. At that time, Isaac Newton’s late 17th century mechanical view of the universe as composed of space, time, motion, mass and energy was the entrenched bedrock of physics upon which to build and expand. Newton’s laws of motion and universal gravitation had “explained the falling of apples and the orbits of planets, the motion of comets and the rising of the tides” (p.59). His work was considered the “highest expression of the human mind in all recorded history” (p.59).

     Einstein demonstrated in 1905 the centrality of the atom to all of physics. Many physicists in the early 20th century did not accept theories of physics based on the atomic view of matter. Einstein’s work on atoms “got to the basic constituents of matter and accounted for the concepts of heat in thermodynamics” (p.67). In addition, Einstein presented a new picture of light as traveling at a constant speed. He contended that physics must “take as a starting point that the speed of light in a vacuum is always the same for all observers, no matter their state of motion with regard to the source” (p.54). The speed of light is “not only a constant, it is also a limiting velocity. Nothing can move faster than this speed. . . moving faster than the speed of light would require an infinite amount of energy, and that is not possible. Nothing can move faster than light in a vacuum” (p.57).

     Einstein’s work on light “revolutionized optics” (p.67). It led Einstein to establish the equivalence of mass and energy, as captured in the famous equation E = mc², where the mass of the body is a measure of its energy content. Einstein’s three 1905 papers, Gimbel writes, left “no single part of the study of physics, the oldest and most established science, which Einstein did not seek to completely overhaul” (p.59). Yet, the papers of Einstein’s miracle year failed to attract significant attention, in part because they came from an obscure 26-year-old patent examiner, not a recognized academic physicist.

     Einstein spent the succeeding years looking for a teaching position and over the course of the next decade became an academic vagabond. He found positions in Bern, Zurich, and Prague before returning to Germany in 1914, where he became director of the Kaiser Wilhelm Institute for Physics and professor at the Humboldt University of Berlin.  1914 was also the fateful year when World War I broke out. At the start of the war, Einstein saw his “worst fears regarding the German character coming true. Not only was there a sense that offensive military adventures were justified in the name of German ascendance, but there was near universal support for them” (p.91).

     Yet, the World War I years were among Einstein’s most productive. One hundred years ago this year, in 1916, Einstein published “The Foundation of the General Theory of Relativity,” in which his signature theory of relativity jelled — a “radical revision of our understanding of the nature of the universe itself” (p.89). At the heart of the theory was the notion that the “laws of physics should be the same for all observers who are moving at a constant speed in a straight line with respect to each other” (p.54). Gimbel terms Einstein’s insight a “triumph of elegance and imagination . . . Isaac Newton’s theory of gravitation, space, time, and motion had dominated physics for three hundred years, standing as the single greatest achievement in the history of science. Here was its successor” (p.89).

     Einstein’s theory of relativity attracted world attention in a way that his 1905 papers had not quite done. With the European powers at war with one another, Einstein’s theories of space, time and the universe “caught the fancy of a world tired of thinking about mankind as barbarians and eager to celebrate its creativity and insight. And at the center of it was this curious, unkempt, wisecracking figure who seemed to stand for a different side of humanity” (p.100).

* * *

     As the European powers fought World War I, Einstein began an affair with his cousin, Elsa Löwenthal, a divorced mother of two daughters. Mileva returned to Zurich with the couple’s two sons after discovering the affair, and she and Einstein divorced in 1919. Months later, Einstein married Elsa. In Elsa, Einstein saw the opposite of Mileva. Whereas Mileva sought to be a gender-barrier-breaking pioneer and Einstein’s intellectual partner, Elsa, with her “simple charm” and “sunny disposition” put her cousin on a pedestal, “never invading his work but instead caring for his more basic needs” (p.95). Until her death in 1936, Elsa assumed a role which Gimbel describes as Einstein’s “business manager,” serving as his gatekeeper and screening the many people “clamoring to have face time, interviews, and collaboration” with her husband (p.95).

     In Weimar Germany in the 1920s, Einstein became what Gimbel describes as a symbol of “scientific cosmopolitanism. He was adored, inspiring poems and architecturally bizarre buildings. His science, combined with his politics during the war, gave him the status of the wise elder statesman among young rebels. The fact that people did not understand his theory of relativity did not diminish his social capital; to the contrary, it increased it. By being the keeper of the mystery, he was considered the high priest of modernism” (p.113). But a toxic anti-Semitism plagued Weimar Germany from the beginning, from which even non-observant Jews like Einstein were not immune.

* * *

     During the Weimar years, Einstein began to “view his Jewishness in a new light” (p.109). He was able, as Gimbel puts it, to “become Jewish again in his own mind without having to surrender the scientific world view, the personal ethic, or the metaphysical foundations upon which he rested his physical theories. Being Jewish became . . . an inalienable aspect of his being” (p.109). Weimar anti-Semitism no doubt played a role in leading Einstein to the view that the experiences of Jews everywhere had “core commonalities that united them into a nation” (p.121). Einstein’s rediscovery of his Jewish roots in the early 1920s thus awakened his interest in Zionism, with its aspiration for a Jewish community in Palestine, an aspiration which Einstein had previously resisted.

     Zionism was “not a natural fit for Einstein, who, to the core of his being, opposed every form of nationalism” (p.121). Einstein worried that Zionism would “rob Judaism of its moral core. . . If Zionism became a movement that was focused on the idolatry of a particular piece of land, then the emergence of all of the evils that have plagued Jews across the globe for thousands of years would find a new source in Jews themselves” (p.124). But Einstein seemed to modify his views after a trip to Tel Aviv in the 1920s.  The “accomplishments by the Jews in but a few years” in Tel Aviv, Einstein wrote, elicited his “highest admiration. A modern Hebrew city with busy economic and intellectual life shoots up from the bare ground. What an incredibly lively people our Jews are!” (p.137). Unlike many Zionists of the day, however, Einstein emphasized the importance of achieving parity between the Arabs and Jews living in Palestine.

       Einstein’s first trip to the United States took place in 1921, when he traveled with Chaim Weizmann, the famed Zionist leader who later became the first President of the State of Israel. Unbeknownst to Einstein, Weizmann was using Einstein not only to raise money for the Zionist cause but also to ward off a challenge from American Supreme Court justice Louis Brandeis for leadership in the worldwide Zionist movement. Einstein’s trip to America failed to raise anywhere near the amount of money that Weizmann had hoped, but an “unintended result” of the trip was to “strengthen Einstein’s identity as a Jew” (p.130). Einstein wrote that it was in America that he “first discovered the Jewish people. . . [coming] from Russia, Poland, and Eastern Europe generally. . . I found these people extraordinarily ready for self-sacrifice and practically creative” (p.130).

      Einstein visited the United States frequently during the Weimar years and took part-time positions at the California Institute of Technology, in Pasadena, in the early 1930s. Teaching at Caltech when Hitler came to power in 1933, Einstein chose to remain in the United States. In 1935, he obtained a research position at the Institute for Advanced Study in Princeton, New Jersey, where he remained until his death in 1955.

* * *

    Einstein’s years at Princeton are treated cursorily in this short volume, almost as an epilogue. Gimbel discusses how Einstein’s concern that the Germans might develop an atomic bomb prompted him to co-sign a letter to President Roosevelt, urging Roosevelt to pre-empt the German effort. This led to the Manhattan Project, in which Einstein was not directly involved. Horrified by the actual use of nuclear weaponry in Japan in 1945, Einstein came to regret his limited role in unleashing this awesome force. Supposedly, he remarked, “I could burn my fingers that I wrote that letter to Roosevelt” (p.172), although this quotation has not been verified. Gimbel also notes that Einstein became an ardent supporter of civil rights, seeing similarities between the treatment of African Americans in the United States and Jews in Europe. His support for civil rights prompted J. Edgar Hoover’s FBI to open a file on him.

* * *

     Einstein’s last years at Princeton were spent writing and speaking for pacifistic causes, working to help Jewish refugees flee Europe, and continuing to work on a grand unified theory of the universe.  On his deathbed, Einstein uttered a single sentence in German, his native tongue, before he passed away.  An American nurse heard his words but could not understand them.  “In death  as in life,” Gimbel concludes, “Albert Einstein left us a mystery” (p.177).


Thomas H. Peebles

La Châtaigneraie, France

September 5, 2016


Filed under Biography, Religion, Science

Catapulting Islam Into the 21st Century



Ayaan Hirsi Ali, Heretic:
Why Islam Needs a Reformation Now 

     Ayaan Hirsi Ali became known internationally and acquired celebrity status through her best-selling memoir Infidel, in which she told the spellbinding story of her journey away from the Islamic faith (I reviewed Infidel here in May 2012). Hirsi Ali was born in 1969 in Somalia and lived in several different places growing up, including Saudi Arabia, Ethiopia and Kenya. Rather than acquiesce in a marriage that her family had arranged for her, Hirsi Ali fled to the West, winding up in the Netherlands. She became a political activist there, winning a seat in the Dutch Parliament as a visible and vocal critic of many Islamic practices, particularly those affecting girls and women. But she was also critical of Dutch authorities and their overly tolerant, ineffectual reaction to such practices as female genital mutilation and “honor killings” of girls and young women who bring “shame” upon their families. Hirsi Ali became a friend of the Dutch filmmaker Theo Van Gogh (a relative of the painter), who was brutally killed in Amsterdam, ostensibly because of the criticisms of Islam contained in a film he had produced. After Van Gogh’s death, Hirsi Ali fled to the United States, where she now lives as a highly visible, outspoken (and heavily guarded) critic of present-day Islam.

    Hirsi Ali’s most recent book, Heretic: Why Islam Needs a Reformation Now represents, she indicates, a “continuation of the personal and intellectual journey” she chronicled in Infidel and her other books (p.54). Here, Hirsi Ali addresses head-on the primary reason she has become a controversial figure: she firmly rejects the conventional liberal view of “jihad,” the wanton and barbaric violence practiced by professed Muslims. In the conventional view, jihad is a grotesque distortion of Islam, the work of a small number of fanatics who have “hijacked” a peaceful faith.

       Not so, Hirsi Ali counters. Citing chapters and verses of the Qur’an, she contends that violence toward “infidels,” both non-Muslims and non-conforming Muslims, is an integral, inseparable component of a complex faith that counts over a billion followers across the globe. Jihad in the twenty-first century is “not a problem of poverty, insufficient education, or any other social precondition. . . we must move beyond such facile explanations. The imperative for jihad is embedded in Islam itself. It is a religious obligation” (p.176). Far from being un-Islamic, the central tenets of the jihadists are “supported by centuries-old Islamic doctrine” (p.205).

       Hirsi Ali is thus not one to avoid the term “Islamic terrorism.” It is no longer plausible, she contends, to argue that organizations such as Boko Haram and the Islamic State, ISIS, have “nothing to do with Islam. It is no longer credible to define ‘extremism’ as some disembodied threat, meting out death without any ideological foundation, a problem to be dealt with by purely military methods, preferably drone strikes. We need to tackle the root of the problem of the violence that is plaguing our world today, and that must be the doctrine of Islam itself” (p.190).

       The sanctioning of violence against infidels is in Hirsi Ali’s view only the most visible manifestation of Islam’s incompatibilities with the “key imperatives of modernity: freedom of conscience, tolerance of difference, equality of the sexes, and an investment in life before death” (p.51). Islamic thought rejects these hallmarks of liberal democratic and economically advanced societies, Hirsi Ali argues. Islam therefore needs a reformation now, not unlike that which Christianity experienced in the 16th century. I would prefer the term “Enlightenment,” referring to the new modes of thinking that emerged in the 18th century. At one point, Hirsi Ali cites two figures associated with the Enlightenment, arguing that Islam “needs a Voltaire” and also has a “dire need” for a John Locke and his “powerful case for religious toleration” (p.209).

      But the terminology is not consequential. What Hirsi Ali advocates is that Islam and the Islamic world modernize. And, surprisingly, Hirsi Ali does not despair: in her view, a genuine Islamic reformation is not as far-fetched and fanciful as one might expect.

* * *

       Hirsi Ali characterizes Islam as, paradoxically, the “most decentralized and yet, at the same time, the most rigid religion in the world. Everyone feels entitled to rule out free discussion” (p.66). Islam has no counterpart to the hierarchical structures of the Catholic Church, starting with the pope and the College of Cardinals. Unlike Christianity and Judaism, the “tribal military and patriarchal values of [Islam’s] origins were enshrined as spiritual values, to be emulated in perpetuity . . . These values pertain especially to honor, male guardianship of women, harshness in war, and the death penalty for leaving Islam” (p.85).

      Islam in Hirsi Ali’s view upends the core Western view that individuals should, within certain limits, decide for themselves how to live and what to believe.  Islam has “very clear and restrictive rules about how one should live and it expects all Muslims to enforce those rules” (p.162). The “comprehensive nature of commanding right and forbidding wrong is uniquely Islamic,” she argues. Because Islam does not confine itself to a separate religious sphere, it is “deeply embedded in political, economic and personal as well as religious life” (p.156). Islam is a “political religion many of whose fundamental tenets are irreconcilably inimical to our way of life” (p.213).

    Hirsi Ali’s analysis discounts the traditional division of Islam into Sunni and Shiite sects. This division is important for understanding geopolitical realities and the sectarian violence in today’s Middle East, particularly in Iraq and Syria, along with the growing regional rivalry between Shiite Iran and Sunni Saudi Arabia.  But the division does not help in understanding Hirsi Ali’s point that jihad-like violence toward “infidels,” including non-conforming Muslims, is embedded in, and an integral part of, both Shiite and Sunni Islam.

       The more salient distinction is between what Hirsi Ali terms “Medina” and “Mecca” Muslims. Medina was the city where the Prophet Muhammad and his small band of 7th century followers gave a more militant cast to their faith, forcing polytheist non-believers – “infidels” — either to convert to Islam or die (Jews and Christians could retain their faith if they paid a special tax).  Medina Muslims aim to emulate the Prophet Muhammad’s warlike conduct after his move to Medina. They are more rigid and tribal than Mecca Muslims, seeking the forcible imposition of Islamic law, sharia, as their religious duty.  Although not all Medina Muslims are violence-prone jihadists, jihad fits comfortably into their worldview. Even if Medina Muslims do not themselves engage in violence, “they do not hesitate to condone it . . . Medina Muslims believe that the murder of an infidel is an imperative if he refuses to convert voluntarily to Islam” (p.15). For Medina Muslims, other faiths and other interpretations of Islam are “simply not valid” (p.40).

      The good news is that Medina Muslims are a minority within the Islamic world. Mecca Muslims, the clear majority, are “loyal to the core creed and worship devoutly, but are not inclined to practice violence” (p.16). But the bad news is that Mecca Muslims are “too passive, indolent, and – crucially – lacking in the intellectual vigor to stand up to the Medina Muslims” (p.49). Winning their support for the reformation which Hirsi Ali envisions will be crucial but far from easy.

      Moreover, reform is “simply not a legitimate concept in Islamic doctrine,” Hirsi Ali argues. The “only accepted and proper goal of a Muslim ‘reformer’ is a return to first principles” (p.64).  Reform in the Islamic world has been narrowly focused on such questions as whether a Muslim could pray on an airplane, a technological innovation unknown to the Prophet Muhammad. But the “larger idea of ‘reform,’ in the sense of fundamentally calling into question central tenets of Islamic doctrine, has been conspicuous by its absence.  Islam even has its own pejorative term for theological troublemakers: ‘those who indulge in innovations and follow their passions’” (p.212-13).

    Hirsi Ali’s case for an Islamic reformation revolves around five central tenets of Islam that she considers incompatible with modernity and that need to be modified, if not abolished, as part of the reformation she advocates.  She sometimes refers to her recommendations on these five tenets as “theses,” in reference to the 95 theses that Martin Luther nailed to the Wittenberg church door in 1517, setting out his indictment of the Catholic Church. But more often she terms her recommendations simply “amendments.”

* * *

      Hirsi Ali’s five amendments are:

1. Ensure that the life of the Prophet Muhammad and the Qur’an are open to interpretation and criticism — The “crucial first step” in the process of modification and reform of Islam will be to “acknowledge the humanity of the Prophet himself and the role of human beings in creating Islam’s sacred texts” (p.105).

2. Give priority to this life, not the afterlife — Islam’s “afterlife fixation” erodes the “intellectual and moral incentives that are essential for ‘making it’ in the modern world” (p.124); until Islam stops fixating on the afterlife, Muslims “cannot get on with the business of living in this world” (p.127).

3. Shackle sharia and end its supremacy over secular law — “What separated Muslims from the infidels . . . was the God-given nature of their laws. And because these laws came ultimately from Muhammad’s divine revelations, they were fixed and could not be changed. Thus the law code dating from the seventh century continues to be followed today in nations and regions that adhere to sharia” (p.133-34).

4. End the practice of empowering individuals to enforce Islamic law — Unlike the totalitarian regimes of the twentieth century, which had to work hard to persuade family members to denounce one another to the authorities, the “power of the Muslim system is that the authorities do not need to be involved. Social control begins at home” (p.154); consequently, “every small act, every minor infraction has the potential to become a major religious crime” (p.165).

5. Abandon the call to jihad — The concept of jihad should be “decommissioned” (p.205); clerics, imams, scholars and national leaders around the world need to declare jihad “haram,” forbidden (p.206).

     Hirsi Ali contends that these amendments can take place “without causing the entire structure [of the Islamic faith] to collapse” (p.73). Her amendments will “actually strengthen Islam by making it easier for Muslims to live in harmony with the modern world” (p.73).  She acknowledges that medieval Christianity knew practices similar to those targeted in all but her 4th amendment (the practice of empowering individuals to enforce Islamic law has no analogue in hierarchical medieval Catholicism).  Reform-minded Islamic experts might quibble about some of Hirsi Ali’s wording and emphasis. I found it surprising that altering Islam’s view of women does not merit a separate amendment. Improvement in the status of women in Hirsi Ali’s analysis is rather an outgrowth of her 3rd amendment, shackling sharia: “there is no more obvious incompatibility between Islam and modernity than the subordinate role assigned to women in sharia law” (p.225).

     Hirsi Ali is far from the first to call for an Islamic reformation.  She nonetheless convinced me that reform of Islamic doctrine and the Islamic worldview along the lines of her five amendments would go far to render Islam a more tolerant religion, capable of coexisting with the world’s other faiths.  But how does Islam catapult from the 16th century into the 21st? Hirsi Ali’s response is vague, underscoring that her book is more polemical than practical — it is not a roadmap to the Islamic reformation.

* * *

      Realization of her five amendments will be “exceedingly difficult” (p.73), Hirsi Ali acknowledges. The struggle for the reformation of Islam is a “war of ideas” which cannot be fought “solely by military means” (p.220).  It must be led by a relatively small number of “dissidents” and “modifying Muslims” within the Muslim world who reject the Medina Muslims’ efforts to return to the time of the Prophet Muhammad. The prize over which the dissidents and the Medina Muslims fight is the “hearts and minds of the largely passive Mecca Muslims” (p.223). The availability of new information technology is critical in empowering those who seek to oppose the Medina Muslims.

      The Western world should “provide assistance and, where necessary, security to those dissidents and reformers who are carrying out [the] formidable task” of seeking to reform Islam from within Muslim majority countries (p.250), Hirsi Ali writes. They should be defended and supported much as the West defended and supported Soviet dissidents during the Cold War.  Such dissidents are “ultimately allies of human freedom though they may differ with Westerners on matters of public policy” and are “unlikely to agree with Westerners on every matter of foreign policy” (p.249).

     But the heart of Hirsi Ali’s message is that Westerners must change the way they think about Islam. We must:

no longer accept limitations on criticism of Islam. We must reject the notion that only Muslims can speak about Islam, and that any critical examination of Islam is inherently ’racist’. . . Multiculturalism should not mean that we tolerate another culture’s intolerance. If we do in fact support diversity, women’s rights, and gay rights, then we cannot in good conscience give Islam a free pass on the grounds of multicultural sensitivity (p.27-28).

In Western countries, she argues at several points, Muslims “must accommodate themselves to Western liberal ideals” (p.213), rather than the other way around.

* * *

      Beyond its vagueness on how to bring about the Islamic reformation in Muslim majority countries, two further shortcomings undermine the cogency of Hirsi Ali’s otherwise trenchant critique.  Hirsi Ali devotes a full section to what she terms “Christophobia,” an antipathy toward Christianity which she says pervades Islamic countries across the globe and dwarfs what we often term “Islamophobia,” discrimination in the West against individuals because of their Muslim backgrounds and unequal treatment of Muslim religious institutions. She discounts Islamophobia as overstated and overblown by journalists.  But in a book targeting Westerners it is myopic to dismiss Islamophobia as inconsequential.  Anyone following the current presidential election in the United States or immigration issues in Europe knows that the phenomenon of Islamophobia needs to be treated as a serious concern in Western societies. Hirsi Ali misses an opportunity to offer Westerners guidance on how they might acknowledge the often-illiberal substantive content of Islamic beliefs and practices without encouraging or succumbing to anti-Islamic hysteria.  Hirsi Ali has more stature than just about anyone I can think of to provide such guidance.  That might be a worthwhile subject of her next book.

      Finally, at the end of her analysis, Hirsi Ali argues that Christianity and Judaism underwent a process of “repeated blasphemy” to evolve and grow into modernity (p.233-34). Those who wanted to uphold the status quo in Christianity and Judaism made the same arguments as those of present-day Muslims: that “they were offended, that the new thinking was blasphemy” (p.233).  The idea of blasphemy as an instrument of Islamic reform is an interesting one, but it appears only as an afterthought at the end of Hirsi Ali’s book. The idea might have had serious clout if she had given it more prominence in the book and shown how it relates to her other arguments for reform. This too might be a worthwhile subject of another provocative Hirsi Ali book.

Thomas H. Peebles
Silver Spring, Maryland
August 9, 2016


Filed under Religion

Changing the Definition of Literature in the Eyes of the Law



Kevin Birmingham, The Most Dangerous Book:
The Battle for James Joyce’s Ulysses

      James Joyce’s enigmatic masterpiece novel Ulysses was first published in book form in France in 1922. Portions of the novel had by then already appeared as magazine excerpts in the United States and Great Britain. The previous year, a court in the United States had declared several such excerpts obscene, and British authorities followed suit in 1923. In The Most Dangerous Book: The Battle for James Joyce’s Ulysses, Kevin Birmingham describes the furor which the novel provoked and the scheming that was required to bring the novel to readers.

     Birmingham, a lecturer in history and literature at Harvard, characterizes his work as the “biography of a book” (p.2). Its core is the twofold story of the many benefactors who aided Joyce in maneuvering around publication obstacles; and of the evolution of legal standards for judging literature claimed to be obscene. Birmingham also provides much insight into Joyce the author, his view of art, and the World War I era literary world in which he operated. The book, Birmingham’s first, further serves as a useful introduction to Ulysses itself for those readers, myself emphatically included, who have not yet garnered the courage to tackle Joyce’s masterpiece.

     Ulysses depicts a single day in Dublin, June 16, 1904. On the surface, the novel follows three central characters, Stephen Dedalus, Leopold Bloom, and his wife Molly Bloom. But Ulysses is also a retelling of Homer’s Odyssey, with the three main characters serving as modern versions of Telemachus, Ulysses, and Penelope. Peering into the 20th century through what Birmingham terms the “cracked looking glass of antiquity” (p.54), Joyce sought to capture both the erotic pleasures and intense pains of the human body; fornication and masturbation, defecation and disease were all part of the human experience that Joyce sought to convey. He even termed his work an “epic of the human body” (p.14).

     Treating sexuality in a more forthright manner than public authorities in the United States and Great Britain were willing to countenance — sex at the time “just wasn’t something a legitimate novelist portrayed” (p.64) — Ulysses was deemed a threat to public morality, and was subject to censorship, confiscation and book-burning spectacles. But the charges levied against Ulysses were about “more than the right to publish sexually explicit material” (p.6), Birmingham contends. They also involved a clash between two rising forces, modern print culture and modern governmental regulatory power, and were thus part of a larger struggle between state authority and individual freedom that intensified in the early twentieth century, “when more people began to challenge governmental control over whatever speech the state considered harmful” (p.6).

     There is a meandering quality to much of Birmingham’s narrative, which shifts back and forth between Joyce himself, his literary friends and supporters, and those who challenged Ulysses in the name of public morality. At times, it is difficult to tie these threads together. But the book regains its footing in a final section describing the definitive trial and landmark 1934 judicial ruling, the case of United States vs. One Book Called Ulysses, which held that the novel was not obscene. The decision constituted the last significant hurdle for Joyce’s book, after which it circulated freely to readers in the United States and elsewhere.  In his section on this case, Birmingham’s central point comes into full focus:  Ulysses changed not only the course of literature but also the “very definition of literature in the eyes of the law” (p.2).

* * *

     James Joyce was born in Dublin in 1882, educated at Catholic schools and University College, Dublin. As a boy, Joyce and his family moved so frequently within Dublin that Joyce could plausibly claim to know almost all the city’s neighborhoods.  But Joyce spent little of his professional career in Dublin. Sometime in 1903 or 1904, Joyce met and fell in love with Nora Barnacle, a chambermaid from rural Galway then working in a Dublin hotel. Barnacle followed Joyce across Europe, bore their children, inspired his literary talent, and eventually became his wife. Joyce and Barnacle lived for several years in the Italian port city of Trieste, then in Zurich and Rome. But the two are best known for their time in Paris, where Joyce became one of the most renowned expatriate writers of the so-called Lost Generation. In 1914, Joyce published his first book, Dubliners, a collection of 15 short stories. Two years later, he completed his first novel, A Portrait of the Artist as a Young Man. While not a major commercial success, the book caught the attention of the American poet Ezra Pound, then living in London. During this time, Joyce also began writing Ulysses.

      The single day depicted in the novel, June 16, 1904, was the day that Joyce and Barnacle first met. Although there may have been single-day novels before Ulysses, “no one thought of a day as an epic. Joyce was planning to turn a single day into a recursive unit of dazzling complexity in which the circadian part was simultaneously the epochal whole. A June day in Dublin would be a fractal of Western civilization” (p.55). The idea of Homeric correspondences and embedding references to the Odyssey into early 20th century Dublin may seem “indulgent,” Birmingham writes, yet Joyce executed it “so subtly that the novel can become a scavenger hunt for pedants . . . Some allusions are so obscure that their pleasure seems to reside in their remaining hidden” (p.130-31).

     In the early 20th century, censors sought to ban obscene works in part to protect the sensibilities of women and children, especially in large urban centers like London and New York. It is thus ironic that strong and forward-minded women are central to Birmingham’s story, standing behind Joyce and assuming the considerable risks which the effort to publish Ulysses entailed. The first two, Americans Margaret Anderson and Jane Heap, were co-publishers of an avant-garde magazine, The Little Review, an “unlikely product of Wall Street money and Greenwich Village bohemia” (p.7-8), and one of several small, “do-it-yourself” magazines which Birmingham describes as “outposts of modernism” (p.71). From London, Ezra Pound linked Joyce to Anderson and Heap, and The Little Review began to publish Ulysses in 1918 in serial form.

      In 1921, New York postal authorities sought to confiscate portions of Ulysses published in The Little Review under the authority of the Comstock Act, an 1873 statute that made it a crime, punishable by up to ten years in prison and a $10,000 fine, to utilize the United States mail to distribute or advertise obscene, lewd or lascivious materials. The Comstock Act adopted the “Hicklin rule” for determining obscenity, a definition from an 1868 English case, Regina v. Hicklin: “whether the tendency of the matter charged as obscenity is to deprave and corrupt those whose minds are open to such immoral influences and into whose hands a publication of this sort may fall” (p.168).

     The Hicklin rule’s emphasis upon “tendency” to deprave and corrupt defined obscenity by a work’s potential effects on “society’s most susceptible readers – anyone with a mind ‘open’ to ‘immoral influences.’ . . . Lecherous readers and excitable teenage daughters could deprave and corrupt the most sophisticated literary intent” (p.168). The Hicklin rule further permitted judges to look at individual words or passages without considering their place in the work as a whole and without considering the work’s artistic or literary value. Finding that portions of Ulysses under review were obscene under the Hicklin rule, a New York court sentenced Anderson and Heap to 10 days in prison or $100 fines. The Post Office sent seized copies of The Little Review to the Salvation Army, “where fallen women in reform programs were instructed to tear them apart” (p.197). The court’s decision served as a ban on publication and distribution of Ulysses in the United States for another 10 years.

     The court’s decision also highlighted the paradoxical role of the Post Office in the early 20th century. Although the postal service “made it possible for avant-garde texts to circulate cheaply and openly to wherever their kindred readers lived,” it was also the institution that could “inspect, seize and burn those texts” (p.7). Moreover, government suppression of sexually explicit material in the United States during and immediately after World War I shaded into its efforts to stamp out political radicalism. Ulysses encountered obstacles to publication in the United States not so much because “vigilantes were searching for pornography but because government censors in the Post Office were searching for foreign spies, radicals and anarchists, and it made no difference if they were political or philosophical or if they considered themselves artists” (p.109).

     Meanwhile, in Great Britain, Harriet Shaw Weaver, a “prim London spinster” (p.12), published Ulysses in serial form in a similarly obscure London publication, The Egoist, also supported by Ezra Pound. After Leonard and Virginia Woolf refused to publish Ulysses in Britain, Weaver imported a full version of the novel from France. In 1923, Sir Archibald Bodkin, the Director of Public Prosecutions, concluded that Ulysses was “filthy” and that “filthy books are not allowed to be imported into this country” (p.253; Bodkin also vigorously prosecuted war resisters during World War I, as discussed in Adam Hochschild’s To End All Wars: A Story of Loyalty and Rebellion, reviewed here in November 2014). Sir Archibald’s ruling authorized British authorities to seize and burn in the “King’s Chimney” 500 copies of Ulysses coming from France.

       The copies subject to Bodkin’s ruling had been printed at the behest of Sylvia Beach, the American expatriate who founded the iconic Parisian bookstore Shakespeare & Company, a “hybrid space, something between an open café and an ensconced literary salon” (p.150), and a home away from home for Joyce, the young Ernest Hemingway, and other members of the Lost Generation of expatriate writers. After Beach became the first to publish Ulysses in book form in 1922, she went on to publish eight editions of the novel and Shakespeare & Company “became a pilgrimage destination for budding Joyceans, several of whom asked Miss Beach if they could move to Paris and work for her” (p.260).

     Over the next decade, Joyce’s novel became an “underground sensation” (p.3), banned implicitly in the United States and explicitly in Great Britain. Editions of Ulysses were smuggled from France into the United States, often through Canada. The book was “literary contraband, a novel you could read only if you found a copy counterfeited by literary pirates or if you smuggled it past customs agents” (p.3). Throughout the decade, Joyce’s health deteriorated appreciably. He had multiple eye problems and, despite numerous ocular surgeries – described in jarringly gruesome detail here — he lost his sight. He also contracted syphilis. By the mid-1920s, Birmingham writes, Joyce was “already an old man. The ashplant cane that he had used for swagger as a young bachelor in Dublin became a blind man’s cane in Paris. Strangers helped him cross the street, and he bumped into furniture as he navigated through his own apartment” (p.289).

* * *

     In 1932, Beach relinquished her claims for royalties from Ulysses.  The up-and-coming New York publishing firm, Random House, under its ambitious young owner Bennett Cerf, then signed a contract with Joyce for publication and distribution rights in the United States, even though the 1921 court decision still served as a ban on distribution of the novel. To formulate a test case, Random House’s attorney, Morris Ernst, a co-founder of the American Civil Liberties Union, almost begged Customs inspectors to confiscate a copy of Ulysses. Initially, an inspector responded that “everybody brings that [Ulysses] in. We don’t pay attention to it” (p.306).  But the book was seized and, some seven months later, the United States Attorney in New York brought a case for forfeiture and confiscation under a statute that allowed an action against the book itself, rather than its publishers or importers. The United States Attorney instituted the test case in the fall of 1933, a few short months after the first book burnings in Nazi Germany.

     The case was assigned to Judge John Woolsey, a direct descendant of the 18th century theologian Jonathan Edwards. Ernst sought to convince Judge Woolsey that the first amendment to the United States Constitution should serve to protect artistic as well as political expression and that the Hicklin rule should be discarded. Under Ernst’s argument, Ulysses merited first amendment protection as a serious literary work, “’too precious’ to be sacrificed to unsophisticated readers” (p.320). Ernst went on to contend that obscenity was a “living standard.” Even if Ulysses had been obscene at the time The Little Review excerpts had been condemned a decade earlier, it could still be protected expression in 1933, given the vast changes in public morality standards since The Little Review ruling.

     Unlike the judges who had considered The Little Review excerpts, Judge Woolsey took the time to read the novel and ended up agreeing with Ernst. He found portions of the book “disgusting” with “many words usually considered dirty.” But he found nothing that amounted to “dirt for dirt’s sake” (p.329). Rather, each word of the book:

contributes like a bit of mosaic to the detail of the picture which Joyce is seeking to construct for his readers. . . when such a great artist in words, as Joyce undoubtedly is, seeks to draw a true picture of the lower middle class in a European city, ought it to be impossible for the American public legally to see that picture? (p.329).

Answering his question in the negative, Judge Woolsey ruled that Joyce’s novel was not obscene and could be admitted into the United States.

     A three-judge panel of the Second Circuit Court of Appeals affirmed Judge Woolsey’s decision, 2-1. The majority consisted of two of the most renowned jurists of the era, Learned Hand, who had been pushing for a more modern definition of obscenity for years; and his cousin, Augustus Hand, who wrote the majority opinion.  Once the appeals court issued its decision, Cerf inserted Judge Woolsey’s decision into the Random House printings of the novel, making it arguably the most widely distributed judicial opinion in history.  Two years later, the trial and appellate court decisions in the United States influenced Britain to abandon the 1868 Hicklin rule. Obscenity in Britain would no longer be a matter of identifying a book’s tendency to deprave and corrupt. Rather, the government must “consider intent and context – the character of a book was all contingent” (p.336).

     United States vs. One Book Called Ulysses established a test for determining whether a work is obscene and thus outside the protection of the first amendment, that, in somewhat modified form, still applies today in the United States.  This test requires a court to consider: (1) the literary worth of the work as a whole, not just selected excerpts; (2) the effect on an average reader, rather than an overly sensitive one; and (3) evolving contemporary community standards.  The decision, Birmingham argues, removed “all barriers to art” and led to “unfettered freedom of artistic form, style and content – literary freedoms that were as political as any speech protected by the First Amendment” (p.11).

* * *

     It is an open question whether Birmingham’s book will inspire readers who have not yet read Joyce’s masterwork to do so. But even those reluctant to undertake Joyce’s work should appreciate Birmingham’s account of how forward-minded early 20th century publishers and members of the literary world schemed to bring Ulysses to the light of day; and how judicial standards evolved to allow room for literary works treating human sexuality candidly and openly.

Thomas H. Peebles
Silver Spring, Maryland
July 29, 2016


Filed under American Society, History, Literature

Turning the Ship of Ideas in a Different Direction



Tony Judt, When the Facts Change,

Essays 1995-2010, edited by Jennifer Homans

      In a 2013 review of Rethinking the 20th Century, I explained how the late Tony Judt became my “main man.” He was an expert in the very areas of my greatest, albeit amateurish, interest: French and European 20th century history and political theory; what to make of Communism, Nazism and Fascism; and, later in his career, the contributions of Central and Eastern European thinkers to our understanding of Europe and what he often termed the “murderous” 20th century. Moreover, Judt was a contemporary, born in Great Britain in 1948, the son of Jewish refugees. Raised in South London and educated at King’s College, Cambridge, Judt spent time as a recently-minted Cambridge graduate at Paris’ fabled Ecole Normale Supérieure; he lived on a kibbutz in Israel and contributed to the cause in the 1967 Six Day War; and he had what he termed a mid-life crisis, which he spent in Prague, learning the Czech language and absorbing the rich Czech intellectual and cultural heritage.  Judt also had several teaching stints in the United States and became an American citizen. In 1995, he founded the Remarque Institute at New York University, where he remained until he died in 2010, age 62, of amyotrophic lateral sclerosis, ALS, which Americans know as “Lou Gehrig’s Disease.”

      Rethinking the 20th Century was more of an informal conversation with Yale historian Timothy Snyder than a book written by Judt. Judt’s best-known work was a magisterial history of post-World War II Europe, entitled simply Postwar. His other published writings included incisive studies of obscure left-wing French political theorists and the “public intellectuals” who animated France’s always lively 20th century debate about the role of the individual and the state (key subjects of Sudhir Hazareesingh’s How the French Think: An Affectionate Portrait of an Intellectual People, reviewed here in June).  Among French public intellectuals, Judt reserved particular affection for Albert Camus and particular scorn for Jean-Paul Sartre.  While at the Remarque Institute, Judt became himself the epitome of a public intellectual, gaining much attention outside academic circles for his commentaries on contemporary events.  Judt’s contributions to public debate are on full display in When the Facts Change, Essays 1995-2010, a collection of 28 essays edited by Judt’s wife Jennifer Homans, former dance critic for The New Republic.

      The collection includes book reviews and articles originally published elsewhere, especially in The New York Review of Books, along with a single previously unpublished entry. The title refers to a quotation which Homans considers likely apocryphal, attributed to John Maynard Keynes: “when the facts change, I change my mind – what do you do, sir” (p.4). In Judt’s case, the major changes of mind occurred early in his professional life, when he repudiated his youthful infatuation with Marxism and Zionism. But throughout his adult life and especially in his last fifteen years, Homans indicates, as facts changed and events unfolded, Judt “found himself turned increasingly and unhappily against the current, fighting with all of his intellectual might to turn the ship of ideas, however slightly, in a different direction” (p.1).  While wide-ranging in subject-matter, the collection’s entries bring into particularly sharp focus Judt’s outspoken opposition to the 2003 American invasion of Iraq, his harsh criticism of Israeli policies toward its Palestinian population, and his often-eloquent support for European continental social democracy.

* * *

      The first essay in the collection, a 1995 review of Eric Hobsbawm’s The Age of Extremes: A History of the World, 1914-1991, should be of special interest to tomsbooks readers. Last fall, I reviewed Fractured Times: Culture and Society in the Twentieth Century, a collection of Hobsbawm’s essays.  Judt noted that Hobsbawm had “irrevocably shaped” all who took up the study of history between 1959 and 1975 — what Judt termed the “Hobsbawm generation” of historians (p.13). But Judt contended that Hobsbawm’s relationship to the Soviet Union — he was a lifelong member of Britain’s Communist Party – clouded his analysis of 20th century Europe. The “desire to find at least some residual meaning in the whole Communist experience” explains what Judt found to be a “rather flat quality to Hobsbawm’s account of the Stalinist terror” (p.26). That the Soviet Union “purported to stand for a good cause, indeed the only worthwhile cause,” Judt concluded, is what “mitigated its crimes for many in Hobsbawm’s generation.” Others – and here Judt was likely speaking for himself – “might say it just made them worse” (p.26-27).

      In the first decade of the 21st century, Judt became known as an early and fervently outspoken critic of the 2003 American intervention in Iraq.  Judt wrote in the New York Review of Books in May 2003, two months after the U.S.-led invasion, that President Bush and his advisers had “[u]nbelievably” managed to “make America seem the greatest threat to international stability.” A mere eighteen months after September 11, 2001:

the United States may have gambled away the confidence of the world. By staking a monopoly claim on Western values and their defense, the United States has prompted other Westerners to reflect on what divides them from America. By enthusiastically asserting its right to reconfigure the Muslim world, Washington has reminded Europeans in particular of the growing Muslim presence in their own cultures and its political implications. In short, the United States has given a lot of people occasion to rethink their relationship with it (p.231).

Using Madeleine Albright’s formulation, Judt asked whether the world’s “indispensable nation” had miscalculated and overreached. “Almost certainly” was his response to his own question, to which he added: “When the earthquake abates, the tectonic plates of international politics will have shifted forever” (p.232). Thirteen years later, in the age of ISIS, Iranian ascendancy and interminable civil wars in Iraq and Syria, Judt’s May 2003 prognostication strikes me as frightfully accurate.

      Judt’s essays dealing with the state of Israel and the seemingly intractable Israeli-Palestinian conflict generated rage, drawing in particular the wrath of pro-Israeli American lobbying groups. Judt, who contributed to Israel’s war effort in the 1967 Six Day War as a driver and translator for the Israeli military, came to consider the state of Israel an anachronism. The idea of a Jewish state, in which “Jews and the Jewish religion have exclusive privileges from which non-Jewish citizens are forever excluded,” he wrote in 2003, is “rooted in another time and place” (p.116). Although “multi-cultural in all but name,” Israel was “distinctive among democratic states in its resort to ethno-religious criteria with which to denominate and rank its citizens” (p.121).

      Judt noted in 2009 that the Israel of Benjamin Netanyahu was “certainly less hypocritical than that of the old Labor governments. Unlike most of its predecessors reaching back to 1967, it does not even pretend to seek reconciliation with the Arabs over which it rules” (p. 157-58). Israel’s “abusive treatment of the Palestinians,” he warned, is the “chief proximate cause of the resurgence of anti-Semitism worldwide. It is the single most effective recruiting agent for radical Islamic movements” (p.167). Vilified for these contentions, Judt repeatedly pleaded for recognition of what should be, but unfortunately is not, the self-evident proposition that one can criticize Israeli policies without being anti-Semitic or even anti-Israel.

      Judt was arguably the most influential American proponent of European social democracy, the form of governance that flourished in Western Europe between roughly 1950 and 1980 and became the model for Eastern European states emerging from communism after 1989: a strong social safety net, free but heavily regulated markets, and respect for individual liberties and the rule of law. Judt characterized social democracy as the “prose of contemporary European politics” (p.331). With the fall of communism and the demise of an authoritarian Left, the emphasis upon democracy had become “largely redundant,” Judt contended. “We are all democrats today. But ‘social’ still means something – arguably more now than some decades back when a role for the public sector was uncontentiously conceded by all sides” (p.332). Judt saw social democracy as the counterpoint to what he termed “neo-liberalism” or globalization, characterized by the rise of income inequality, the cult of privatization, and the tendency – most pronounced in the Anglo-American world – to regard unfettered free markets as the key to widespread prosperity.

      Judt asked 21st century policy makers to take what he termed a “second glance” at how “our twentieth century predecessors responded to the political challenge of economic uncertainty” (p.315). In a 2007 review of Robert Reich’s Supercapitalism: The Transformation of Business, Democracy, and Everyday Life, Judt argued that the universal provision of social services and some restriction upon inequalities of income and wealth are “important economic variables in themselves, furnishing the necessary public cohesion and political confidence for a sustained prosperity – and that only the state has the resources and the authority to provide those services and enforce those restrictions in our collective name” (p.315).  A second glance would also reveal that a healthy democracy, “far from being threatened by the regulatory state, actually depends upon it: that in a world increasingly polarized between insecure individuals and unregulated global forces, the legitimate authority of the democratic state may be the best kind of intermediate institution we can devise” (p.315-16).

      Judt’s review of Reich’s book anticipated the anxieties that one sees in both Europe and America today. Fear of the type last seen in the 1920s and 1930s had reemerged as an “active ingredient of political life in Western democracies” (p.314), Judt observed one year prior to the economic downturn of 2008.  Indeed, one can be forgiven for thinking that Judt had the convulsive phenomena of Brexit in Britain and Donald Trump in the United States in mind when he emphasized how fear had woven itself into the fabric of modern political life:

Fear of terrorism, of course, but also, and perhaps more insidiously, fear of uncontrollable speed of change, fear of the loss of employment, fear of losing ground to others in an increasingly unequal distribution of resources, fear of losing control of the circumstances and routines of one’s daily life.  And perhaps above all, fear that it is not just we who can no longer shape our lives but that those in authority have lost control as well, to forces beyond their reach. . . This is already happening in many countries: note the rising attraction of protectionism in American politics, the appeal of ‘anti-immigrant’ parties across Western Europe, the calls for ‘walls,’ ‘barriers,’ and ‘tests’ everywhere (p.314).

       Judt buttressed his case for social democracy with a tribute to the railroad as a symbol of 19th and 20th century modernity and social cohesion.  In essays that were intended to be part of a separate book, Judt contended that the railways “were and remain the necessary and natural accompaniment to the emergence of civil society. They are a collective project for individual benefit. They cannot exist without common accord . . . and by design they offer a practical benefit to individual and collectivity alike” (p.301). Although we “no longer see the modern world through the image of the train,” we nonetheless “continue to live in the world the trains made.”  The post-railway world of cars and planes, “turns out, like so much else about the decades 1950-1990, to have been a parenthesis: driven, in this case, by the illusion of perennially cheap fuel and the attendant cult of privatization. . . What was, for a while, old-fashioned has once again become very modern” (p.299).

      In a November 2001 essay appearing in The New York Review of Books, Judt offered a novel interpretation of Camus’ The Plague as an allegory for France in the aftermath of German occupation, a “firebell in the night of complacency and forgetting” (p.181).  Camus used The Plague to counter the “smug myth of heroism that had grown up in postwar France” (p.178), Judt argued.  The collection concludes with three Judt elegies to thinkers he revered, François Furet, Amos Elon, and Leszek Kołakowski, a French historian, an Israeli writer and a Polish communist dissident, representing key points along Judt’s own intellectual journey.


      The 28 essays which Homans has artfully pieced together showcase Judt’s prowess as an interpreter and advocate – as a public intellectual — informed by his wide-ranging academic and scholarly work.  They convey little of Judt’s personal side.  Readers seeking to know more about Judt the man may look to his The Memory Chalet, a memoir posthumously published in 2010. In this collection, they will find an opportunity to savor Judt’s incisive if often acerbic brilliance and appreciate how he brought his prodigious learning to bear upon key issues of his time.

Thomas H. Peebles
La Châtaigneraie, France
July 6, 2016


Filed under American Politics, European History, France, French History, History, Intellectual History, Politics, Uncategorized, United States History, World History

A Particular Sort of Friendship



Ben Macintyre, A Spy Among Friends: 

Kim Philby and the Great Betrayal 

     In the long history of espionage – sometimes described as the world’s second oldest profession – few chapters are as bizarre and as intriguing as that of the infamous “Cambridge Five”: Kim Philby, Donald Maclean, Guy Burgess, Anthony Blunt and John Cairncross, five well-bred upper class lads who studied at Cambridge University in the 1930s, then left the university to spy for the Soviet Union.  Among them, Philby might qualify as the most infamous. Even by the standards of spies, Philby’s duplicity and mendacity were breathtaking. The historical consensus is that, during his long career as a British-Soviet double agent, Philby provided more damaging information to the Soviets than any of his peers: details on British counterintelligence activities, the identities of British agents and operatives, the structure of Britain’s intelligence services, even information on his father and wife Aileen.  These betrayals led directly to the deaths of countless persons.

     But the betrayals that form the core of Ben Macintyre’s account of Philby and his milieu, A Spy Among Friends: Kim Philby and the Great Betrayal, involve Philby’s friendship with his protégé within the British intelligence service, Nicholas Elliott; and, to a lesser extent, with his American counterpart James Jesus Angleton. By focusing on Philby’s relationships with Elliott and Angleton, Macintyre seeks to capture what he describes as a “particular sort of friendship that played an important role in history.” His book, unlike others on Philby, is “less about politics, ideology, and accountability than personality [and] character” (p.xv). Macintyre, a writer-at-large for The Times of London, also casts much light on the insularity of upper class Britain’s ruling elite in the mid-20th century, a “family” where “mutual trust was so absolute and unquestioned that there was no need for elaborate security precautions” (p.88). Although not quite a “real-life-spy-thriller,” Macintyre’s compact and measured account is in its own way as riveting as the spy fiction of Ian Fleming, who appears briefly here as a Naval Intelligence Officer and confidant of Elliott; or of John Le Carré, the author of Tinker Tailor Soldier Spy, based in part on Philby’s story, who has written a short “Afterword” to Macintyre’s book.

* * *

     Harold Adrian Russell Philby — nicknamed “Kim” after the boy in Rudyard Kipling’s novel Kim — was born in India in 1912, the son of a well-known author and explorer who became a civil servant in India and later converted to Islam. Philby was educated in elite British private schools (paradoxically termed “public schools”) and at Cambridge’s prestigious Trinity College, where he also began espionage work for the Soviet Union. He launched his career with British intelligence during World War II. He served for a while as head of Britain’s primary counterintelligence unit, Section V of Britain’s Secret Intelligence Service, MI6, coordinating Britain’s anti-Soviet clandestine activity while simultaneously providing information to the Soviets. He led his double life in London and in foreign assignments in Istanbul, Washington, and Beirut. From Beirut, he defected to Moscow in 1963.

     The word most consistently used to describe Philby was “charm,” Macintyre writes, that “intoxicating, beguiling, and occasionally lethal English quality.”  Philby could “inspire and convey affection with such ease that few ever noticed they were being charmed. Male and female, old and young, rich and poor, Kim enveloped them all” (p.19). Like many intelligent and idealistic young men coming of age in the 1930s, Philby became a believer in the great Soviet experiment.  His beliefs were “radical but simple”: the rich had “exploited the poor for too long; the only bulwark against fascism was Soviet communism . . . capitalism was doomed and crumbling; the British establishment was poisoned by Nazi leanings” (p.37-38). There is no evidence that Philby “ever questioned the ideology he had discovered at Cambridge, changed his opinions, or seriously acknowledged the iniquities of practical communism,” Macintyre argues. Moreover, Philby “never shared or discussed his views, either with friend or foe. Instead, he retained and sustained his faith, without the need for priests or fellow believers, in perfect isolation.  Philby regarded himself as an ideologue and a loyalist; in truth, he was a dogmatist, valuing only one opinion, his own” (p.215).

     But Macintyre’s story revolves around Nicholas Elliott almost as much as Philby.  Born five years after Philby in 1917, Elliott was the son of the Headmaster at Eton, one of Britain’s most prestigious public schools.  Elliott and Philby were “two men of almost identical tastes and upbringing” (p.2), as close as “two heterosexual, upper-class midcentury Englishmen could be” (p.249). The two men:

learned the spy trade together during the Second World War. When that war was over, they rose together through the ranks of British intelligence, sharing every secret. They belonged to the same clubs, drank in the same bars, wore the same well-tailored clothes, and married women of their own tribe. But all that time, Philby had one secret he never shared: he was covertly working for Moscow, taking everything he was told by Elliott and passing it on to his Soviet spymasters (p.1).

     During World War II, the American James Jesus Angleton built a strong working relationship with both Elliott and Philby, working in the counterintelligence section of the Office of Strategic Services that was the direct counterpart to MI6’s Section V.  Angleton was a Yale graduate who enjoyed the bonhomie of time spent with Elliott and Philby, trading information in exchanges often fueled by substantial amounts of alcohol. After World War II, Angleton became head of counterintelligence at the CIA. No two spies better symbolized the close rapport between British and American intelligence services during the early Cold War than Philby and Angleton, Macintyre contends.

     The dichotomy and tension between MI5, Britain’s Security Service, and MI6, its Secret Intelligence Service, run throughout Macintyre’s story.  Americans can appreciate the differences between the two units, as Macintyre compares MI5 to the FBI and MI6 to the CIA. The two services were “fundamentally dissimilar in outlook. MI5 tended to recruit former policemen and soldiers, men who sometimes spoke with regional accents and frequently did not know, or care about, the right order to use the cutlery at a formal dinner. They enforced the law and defended the realm, caught spies and prosecuted them” (p.162). MI6 by contrast was a prototype upper class Establishment institution, “more public school and Oxbridge; its accent more refined, its tailoring better. Its agents and officers frequently broke the laws of other countries in pursuit of secrets and did so with a certain swagger” (p.162).  But along with this swagger came a tendency in the old boy network that was MI6 not to ask questions about one of their own and to assume that all members of the elite club were what they seemed.

     The extent to which alcohol drove Philby and fueled his exchanges with Elliott, Angleton and other counterparts is astounding. “Even by the heavy-drinking standards of wartime, the spies were spectacular boozers” (p.25), Macintyre notes. In his “Afterword,” Le Carré describes alcohol as “so much a part of the culture of MI6” that a non-drinker “could look like a subversive or worse” (p.298).  Indeed, it is difficult to imagine how these spies could have maintained their guard with so much alcohol in their systems.  And, as Macintyre further notes, no one “served (or consumed) alcohol with quite the same joie de vivre and determination as Kim Philby” (p.26).  Alcohol helped Philby “maintain the double life, for an alcoholic has already become divorced from his or her real self, hooked on an artificial reality” (p.215).

       During World War II, Philby provided the Soviet Union with the names of several thousand members of the anti-Nazi resistance movement in Germany, Germans working with Britain in the hope that a genuine democracy might be established in their country after the war.  Many were rounded up and presumed shot by the Soviets after the Russian conquest of what became East Germany.  After the war, Philby was posted to Istanbul, where he served as head of British intelligence, under the cover of First Secretary at the British Consulate. He served in a similar position in Washington, D.C. From these positions, he furnished the Soviets with a steady stream of invaluable information. As Macintyre emphasizes, Philby not only told his Soviet handlers what Britain’s spymasters were doing; he was also able to “tell Moscow what London was thinking” (p.104). Philby undermined British counter-revolutionary operations in Georgia, Armenia and Albania, with many of the operatives dying in uneven combat.  These were “ill-conceived, badly planned” operations that “might well have failed anyway; but Philby could not have killed [the operatives] more certainly if he had executed them himself.” (p.118).  Their ensuing deaths did not trouble him, then or later.

     There was what Macintyre describes as a “peculiar paradox” to Philby’s double dealing: “if all his anti-Soviet operations failed, he would soon be out of a job; but if they succeeded too well, he risked inflicting real damage on his adopted cause” (p.95).  Philby thus maintained a “pattern of duality” in which he “consistently undermined his own work but never aroused suspicion. He made elaborate plans to combat Soviet intelligence and then immediately betrayed them to Soviet intelligence; he urged ever greater efforts to combat the communist threat and personified that threat; his own section worked smoothly, yet nothing quite succeeded” (p.103).  

      In May 1951, fellow double agents Burgess and Maclean suddenly disappeared, fleeing to Moscow.  Maclean had come under suspicion as a Soviet mole within British intelligence, and Philby sent Burgess, who lived with Philby and his wife in Washington, to alert Maclean that he was about to be arrested. Philby had not intended that Burgess himself flee. When he did — which Philby considered an act of betrayal – Philby himself came under suspicion as the “third man,” still another Soviet mole within British intelligence, and was forced to resign from MI6.

     Over the course of the next several years, Philby was investigated by both MI5 and MI6, with MI5 taking the position that Philby was a Soviet spy, but without the evidence to prove its case, while MI6 remained equally certain of his innocence, but without evidence to exonerate him. Similarly, in Washington, J. Edgar Hoover and the FBI were convinced that Philby was a Soviet agent, whereas Angleton’s CIA defended him. Philby’s case thus remained in limbo for “months and then years,” a “bubbling unsolved mystery, still entirely unknown to the public but the source of poisonous discord between the intelligence services” (p.173).

     In 1955, Foreign Secretary Harold Macmillan agreed with an MI6 report that, with no hard evidence despite four years of investigation, it would be “entirely contrary to the English tradition for a man to have to prove his innocence. . . in a case where the prosecution has nothing but suspicion to go upon” (p.186). Based upon the report and a subsequent softball MI6 interview of Philby – in which, Macintyre speculates, Elliott was likely one of the two interviewers — Macmillan officially exonerated Philby.

     No longer in limbo, Philby resumed work for MI6, going to Beirut in 1956 under cover as Middle East correspondent for The Observer and The Economist.  Philby’s nearly seamless return to British intelligence, Macintyre observes, “displayed the old boys’ network running at its smoothest: a word in an ear, a nod, a drink with one of the chaps at the club, and the machinery kicked in” (p.208).  Journalism can be the perfect cover for a spy and double agent, allowing the journalist to ask “direct, unsubtle, and impertinent questions about the most sensitive subjects without arousing suspicion” (p.211). But Philby’s work as a journalist proved to be his undoing.

      What British authorities took as ironclad proof of Philby’s double agency came from Flora Solomon, a prominent Jewish-Russian émigré to Britain who had known Philby since the 1930s (Solomon’s son Peter founded Amnesty International in 1961). Solomon’s main passion by the 1960s was the State of Israel, which she “defended and supported in word, deed, and funds at every opportunity” (p.244).  Solomon became increasingly irritated by what she perceived as anti-Israel and hence pro-Soviet bias to Philby’s Middle East reporting. Almost casually, she reported to another pillar of the Anglo-Jewish community, Lord Victor Rothschild, then Chairman of Marks & Spencer’s, who had worked in MI5 during the war, that Philby had clumsily tried to recruit her to spy for the Soviet Union in the 1930s.  Rothschild, in turn, reported Solomon’s information to MI5.  Solomon’s revelation was the ammunition that MI5 had lacked and the evidence of guilt that Philby’s MI6 supporters had always demanded.

     Although still not convinced that it had enough evidence to successfully prosecute Philby, MI6 sent Elliott to Beirut in January 1963 to extract a confession. MI6’s ostensible strategy was to offer Philby immunity from prosecution in return for a full confession.  In a series of tense meetings between the long-time friends, which Macintyre ably recounts based upon secret recordings, Philby became increasingly open about his years of activity as a Soviet agent, even providing the names of Blunt and Cairncross as the fourth and fifth Cambridge spies.  Signed confession in hand, Elliott left Beirut.

     Shortly thereafter, Philby failed to appear at an Embassy dinner party, fleeing to Moscow on a Soviet freighter. Elliott, Macintyre writes, “could not have made it easier for Philby to flee, whether intentionally or otherwise. In defiance of every rule of intelligence, he left Beirut without making any provision for monitoring a man who had just confessed to being a double agent: Philby was not followed or watched; his flat was not placed under surveillance; his phone was not tapped; and MI6’s allies in the Lebanese security service were not alerted. . . Elliott simply walked away from Beirut and left the door to Moscow wide open” (p.267).

     Elliott later claimed that the possibility that Philby might defect to Moscow had never occurred to him or to anyone else, a claim which “defies belief” (p.266).  But Macintyre suggests that MI6 may have deliberately allowed Philby to escape to Moscow. “Nobody wanted him in London” (p.266). Although Elliott had made clear to Philby that if he failed to cooperate fully, the “immunity deal was off and the confession he had signed would be used against him,” the prospect of prosecuting Philby in Britain was “anathema to the intelligence services. . . politically damaging and profoundly embarrassing” (p.266-67).  MI6 may have therefore concluded that allowing Philby to join Burgess and Maclean in Moscow was the “tidiest solution all round” (p.267).

     From the moment he finally understood and accepted Philby’s betrayal, “Elliott’s world changed utterly: inside he was crushed, humiliated, enraged, and saddened.” For the rest of his life, Elliott never ceased to “wonder how a man to whom he had felt so close, and who was so similar in every way, had been, underneath, a fraud” (p.250). Elliott also began to ask himself:

how many people he, James Angleton, and others had unwittingly condemned to death. Some of the victims had names . . . Many casualties remained nameless . . . Elliott would never be able to calculate the precise death tally, for who can remember every conversation, every confidence exchanged with a friend stretching back three decades? . . . Elliott had given away almost every secret he had to Philby; but Philby had never given away his own (p.249).

Although discredited within British intelligence after Philby’s defection, Elliott remained in the service until 1968.  In the 1980s, he became an unofficial advisor on intelligence matters to Prime Minister Thatcher.  He died in 1994.

     As to Angleton, after Philby’s defection, a “profound and poisonous paranoia” seemed to seize him. In Angleton’s warped logic, “If Philby had fooled him, then there must be many other KGB spies in positions of influence in the West. . . Convinced that the CIA was riddled with Soviet spies, Angleton set about rooting them out, detecting layer after layer of deception surrounding him. He suspected that a host of world leaders were all under KGB control” (p.285-86).  Angleton was forced out of the CIA in 1974, when the “extent of his illegal mole hunting was revealed” (p.287). He died in 1987.

     Philby lived his remaining years, a quarter of a century, in the Soviet Union. The Soviets provided Philby with accommodations and allowed him to live a relatively undisturbed life. But they hardly welcomed him. He was of little use to them by then. In Moscow, Macintyre writes, Philby at times “sounded like a retired civil servant put out to pasture (which, in a way, he was), harrumphing at the vulgarity of modern life, protesting against change . . . He demanded not only admiration for his ideological consistency, for having ‘stayed the course,’ but sympathy for what it had cost him” (p.284). In his last years, he was awarded the Order of Lenin, which he compared to a knighthood.

* * *

     With no apparent remorse and few if any second thoughts about the path he chose to travel during his life’s journey, Philby died in the Soviet Union in 1988.  He was buried in Moscow’s Kuntsevo cemetery, a long distance from Cambridge.

Thomas H. Peebles

La Châtaigneraie, France

June 24, 2016






Filed under British History, European History, History, Soviet Union

Extraordinarily Intense and Abstract



Sudhir Hazareesingh, How the French Think:

An Affectionate Portrait of an Intellectual People 


     You may wince at the title of Sudhir Hazareesingh’s book, How the French Think: An Affectionate Portrait of an Intellectual People.  Attempting to explain in book form “how the French think” seems like an audacious if not preposterous undertaking. Yet, however improbably, Hazareesingh, a professor at Oxford University who also teaches in Paris, somehow accomplishes the daunting tasks he sets for himself: identifying the “cultural distinctiveness of French thinking” (p.3) and showing how and why the activities of the mind have “occupied such a special place in French public life” (p.7).

     In his sweeping, erudite yet highly readable work, Hazareesingh affably guides his readers through three centuries of French intellectual history. He approaches with light-hearted humor his impossibly broad and – certainly to the French – highly serious subject. He assumes that it is possible to make “meaningful generalizations” about the “shared intellectual habits of a people as diverse and fragmented as the French” (p.17). He is most concerned with presenting selected “meaningful generalizations” about how the French – and particularly France’s intellectual elite — have looked upon the country, its past, its major political institutions, and its place in the larger world.  He places particular emphasis upon the theories and ideas which have sustained France’s political divisions since the 1789 French Revolution.

     Hazareesingh finds French thinking to be both extraordinarily intense and, by Anglo-American standards, extraordinarily abstract. Ideas in France are “believed not only to matter but, in existential circumstances, to be worth dying for” (p.17). He identifies a quintessentially French “fetish” – a term used frequently throughout his book – for “unifying theoretical syntheses and for formulations which are far-reaching and outlandish – and sometimes both” (p.111). The notion of knowledge as “continuous and cumulative, which is such a central premise of Anglo-Saxon epistemology,” is, Hazareesingh argues, “alien to the French way of thinking” (p.21).  French ideas tend to be the product of a form of thinking which is “not necessarily grounded in empirical reality,” giving them a “speculative” character (p.21).

     More than elsewhere, French thinking tends to look at issues as binary choices, between either A or B: nationalism or universalism; individualism or collective spirit; spiritualism or science. French thinking also reserves a special place for paradox, producing passionate rationalists, revolutionary traditions, secular missionaries and, on the battlefield, glorious defeats.  France’s vaunted sense of exceptionalism, which lies in its distinct “association of its own special quality with its moral and intellectual prowess” (p.11), endures today side by side with a pervasive sense of pessimism and decline – malaise.  In the 18th century, French political philosopher Baron de Montesquieu observed that French thinkers had mastered “doing frivolous things seriously, and serious things frivolously” (p.7), and Hazareesingh finds that the same “insouciance of manner” also endures in today’s France.

      Hazareesingh arranges his work into ten chapters, working toward the present. He starts with the influence of the 17th century philosopher, mathematician and scientist René Descartes on all subsequent French thinking. Within a Cartesian framework, he then discusses in the next five chapters distinctive 19th century modes of thought in France: exotic sects devoted to mysticism and occultism; the powerful influence of science on 19th century French thinking; the evolution of notions of a political Left and Right; and the emergence of a French view of “the Nation” and French identity toward the end of the century.  Although focused on the 19th century – and in some cases, the 20th century up to the fall of the Third French Republic in 1940 – these chapters also address the contemporary presence and influence of the chapter’s subject matter. Each could serve as an informative and entertaining stand-alone essay.

      The chapter on the emergence of the political Left and Right in the aftermath of the French Revolution is both the thread that ties together the book’s chapters on 19th century French thinking and its link to the final four chapters, on post-World War II French political and social thought. These final chapters revolve around the providential leadership style of Charles de Gaulle and the persistent attraction of communism as the heart of the French intelligentsia’s opposition to de Gaulle. Along the way, Hazareesingh discusses a host of post-World War II French thinkers, particularly the ubiquitous Jean-Paul Sartre.  He also provides an illuminating overview of the Structuralist movement, which gained great sway in academic circles, especially in American universities, for its grandiose analysis of human culture. Its key thinkers – Claude Lévi-Strauss, Michel Foucault, Jacques Derrida – seem to personify France’s proclivity for abstract if not obtuse thinking.  In his final chapters, Hazareesingh describes the widespread contemporary French malaise, with French historians and the political intelligentsia looking at the country, its past and future, with a deepening sense of pessimism and despair.

* * *

     In Hazareesingh’s estimation, modern French thinking began in the 17th century with René Descartes and his belief in the primacy of human reason, the “defining feature of the human condition” (p.50). Descartes’ signal contribution was to “accustom men increasingly to found their knowledge on examination rather than belief” (p.33), thereby rejecting arguments based upon religious faith.  The esprit cartésian, “based on logical clarity and the search for certainty” (p.33), rests on the conviction that reason is the “only source of our ability to make moral judgments and impose a durable conceptual order on the world” (p.50).

     The distinction between a political Left and Right, Hazareesingh writes, has often been viewed as a manifestation of the Cartesian character of French thought and its “propensity to cast political ideas in binary terms and to follow lines of reasoning to their extremes” (p.133). The distinction originated in the early phases of the French Revolution, when supporters of the king’s prerogative to veto legislation gathered on the right side of the 1789 Constituent Assembly, while opponents of the royal veto grouped on the Assembly’s left side.  Throughout the 19th century and up to the fall of the Third Republic in 1940, the subsequent debate between Left and Right was “largely between advocates and opponents of the French Revolution itself” (p.136).

     Central to the mindset of the many tribes on the Left during the 19th century was a “belief in the possibility of redesigning political institutions to create a better, more humane society whose members were freed from material and moral oppression” (p.137). This entailed above all establishment of a republican form of government, with power “exercised by elected representatives in the name of the people” (p.137). Political change “could be meaningful only if it was comprehensive and cleansing” (p.143).  The conceptual origins of European socialism and social democracy may be found on the left side of the 1789 Constituent Assembly.

      The 18th century Swiss political philosopher Jean-Jacques Rousseau provided a major share of the conceptual underpinning for France’s Leftist sensibilities.  Rousseau concluded that it was “plainly contrary to the law of nature” that the “privileged few should gorge themselves with superfluities, while the starving multitudes are in want of the bare necessities of life” (p.79-80). Rousseau’s protean political philosophy appealed simultaneously to the “libertarian yearning for absolute freedom, the progressive quest for a better world and the collectivist desire for equality” (p.80). In the mid-19th century, the ideas of Auguste Comte further animated the Leftist vision. One of the 19th century’s “most original standard-bearers of Cartesianism” (p.33), Comte attempted to unite all forms of scientific inquiry into a single overarching philosophical system, inspiring a republican faith in education and science as keys to building a progressive, secular and just society.

     The counterpoint to the vision of the French Left was shaped by Edmund Burke’s Reflections on the Revolution in France (discussed here in May 2015 in a review of Yuval Levin’s The Great Debate: Edmund Burke, Thomas Paine, and the Birth of Right and Left).  Burke’s Reflections constituted “such an iconic representation of anti-1789 sentiment that copies were burned in bonfires by revolutionary peasants” (p.138). Like Burke, the political Right in France defended the entrenched institutions that the French Revolution sought to uproot — notably, monarchy, aristocratic privilege, and the Catholic Church – and stridently resisted the democratic and republican impulses of the Left. The language of the Right was “typically about the avoidance of conflict, the defense of hierarchy, the appeal to tradition and religious faith. . . the Right was predominantly concerned with the preservation (or restoration) of social stability” (p.141).

     In the first half of the 19th century, the most fervent proponents of the Right’s conservative vision were Catholic traditionalists and the royalists who never relinquished their dream of a restoration of the Bourbon monarchy. Hazareesingh credits the ultra-royalist polemicist Joseph de Maistre with encapsulating the Right’s aversion to everything associated with the 1789 Revolution. De Maistre saw the events of the 1790s as a “manifestation of divine retribution for decades of French irreligiosity and philosophical skepticism” (p.138). The notion  of universal rights of man was to de Maistre a “senseless abstraction.”  De Maistre is best known to history for his observation that he had “seen Frenchmen, Italians, Russians. . . but as to man, I have never met one” (p.138).

      A central theme in the mythological imagination of the Right in the latter half of the 19th century was the “presence of sinister forces working to unravel the fabric of French society.” These destructive agents were “all the more noxious in that they were often perceived to represent alien interests and values” (p.150).  Jews in particular came to be identified as posing the ultimate existential menace to traditional conservative ideals, as manifested in the notorious affair involving Alfred Dreyfus, the Jewish Army officer wrongly convicted of spying for Germany in 1894 (three books on the Dreyfus Affair were reviewed here in 2012).  In the 20th century, the French political Right contributed to the “genesis of fascist doctrine” in Europe (p.147). The demise in 1944 of the collaborationist Vichy regime that ruled much of France during the years of German occupation marked the effective end for this traditional, counter-revolutionary French Right.


* * *

      After World War II, two developments reshaped the schism between Left and Right: the emergence of a “new synthetic vision of Frenchness, centered around Charles de Gaulle, and the entrenchment of Marxist ideas among the intelligentsia” (p.191). In their “schematic visions of the world after the Second World War, and in their bitter opposition to each other,” Gaullists and Marxists, “symbolized the French capacity for intellectual polarization and their apparent relish for endlessly reproducing the older divisions created by the Revolution” (p.196).

     De Gaulle modernized French conservative thought by “incorporating more fraternal ideals into its scheme of values, notably, by granting voting rights to women and, later, ending French rule in Algeria” (p.192). Although his leadership revolved around his own charismatic persona as the incarnation of the grandeur of France — echoing Napoleon Bonaparte – de Gaulle was also relentlessly pragmatic.  He “did not hesitate to discard key elements of the heritage of the French Right, especially its hostility to republicanism and its xenophobic, racialist and anti-egalitarian tendencies” (p.192).

     The French intelligentsia’s “extraordinary fascination” with communist theory was “born out of the First World War and its apogee in France between the 1930s and the ‘60s coincided with one of the most troubled periods in the nation’s modern history” (p.102). Although ostensibly identifying with the Soviet Union as a model of governance, French communism “remained deeply rooted in [France’s] historic political culture” (p.107). Through the 1960s, communism offered its intellectual adherents a “way of experiencing the values of friendship, human solidarity and fraternity” (p.107).

     Throughout the post-War period, Jean-Paul Sartre dominated the French intellectual landscape. The “flamboyant personification of the French ‘intellectual,’” Sartre combined high-visibility interventions in the political arena with an “original synthesis of Marxism and existentialism” and a “commitment to revolution, ‘the seizure of power by violent class struggle’” (p.230). After Sartre’s death in 1980 and the election of the reformist Socialist President François Mitterrand in 1981, Hazareesingh observes a change in the tone of the discourse between the political Left and Right.

      The ideals at the heart of Sartre’s “redemptive conception of politics – communism, revolution, the proletariat – lost much of their symbolic resonance in the 1980s,” Hazareesingh indicates. Marxism “ceased to be the ‘unsurpassable horizon’ of French intellectual life as the nation elected a reformist socialist as its president, the Communist Party declined, the working class withered away and the Cold War came to an end” (p.236).   By the time Mitterrand was elected in 1981, the “division between Left and Right was already beginning to decline. . . the Right had moved away from its republican rejectionism . . . [and] the Left completed the movement in the 1980s by abandoning the universalist abstractions that underpinned progressive thought: the belief in human perfectibility and the sense that history had a purpose and that capitalist society could be radically overhauled” (p.158).

* * *

        Today, France grapples with a “growing sense of unease about its present condition and its future prospects” (p.21), the French malaise. The factors giving rise to contemporary malaise include the decline of the French language internationally, coupled with France’s diminished claim to be a world power. But since the late 1980s, France’s pervasive pessimism seems most closely linked to issues of multi-culturalism and integration of France’s Muslim population.  As in every European nation with even a modest Muslim population, how to treat this minority remains an overriding challenge in France.  Few thinkers, Left or Right, are optimistic that France’s Muslim population can be successfully integrated into French society while France remains true to its revolutionary republican principles.

     Hazareesingh sees the rise of France’s nationalistic, xenophobic National Front party, originally headed by Jean-Marie Le Pen and now by his estranged daughter, Marine Le Pen, as not only a response to the pervasive sense of French national decline but also a telling indication of the diminished clout of today’s political intelligentsia.  He chastises the “collective inability of the intellectual class” over the past decade to “confront the rise of the Front National and the growing dissemination of its ideas among the French people — a silence all the more remarkable as, throughout their history, and notably during the Dreyfus Affair, French intellectuals were at the forefront of the battle against racism and xenophobia. It is a measure of the disorientation of the nation’s intellectual and cultural elites on this issue that some progressive figures now openly admit their fascination with Jean-Marie Le Pen” (p.256-57).

* * *

     Despite the doom and gloom that he perceives throughout contemporary France, Hazareesingh concludes optimistically that in facing the challenges of the 21st century, it is “certain” that the French will “remain the most intellectual of peoples, continuing to produce elegant and sophisticated abstractions about the human condition” (p.326). Let’s hope so – and let’s hope that Hazareesingh might again provide clear-headed guidance for English-language readers on how to understand these sophisticated abstractions, as he does throughout this lucid and engaging work.


Thomas H. Peebles

La Châtaigneraie, France

June 9, 2016





Filed under France, French History, History, Intellectual History, Political Theory, Politics, Uncategorized

The 22-Month Criminal Partnership That Turned the World On Its Head



Roger Moorhouse, The Devils’ Alliance:
Hitler’s Pact With Stalin, 1939-41 

     On August 23, 1939, Nazi Germany and the Soviet Union stunned the world by executing a non-aggression pact, sometimes referred to as the “Ribbentrop-Molotov” accord after the foreign ministers of the two countries.  The pact, executed in Moscow, seemed to come out of nowhere and was inexplicable to large portions of the world’s population, not least to German and Soviet citizens. Throughout most of the 1930s, Nazi Germany and Soviet Russia had each vilified the other as its archenemy.  Hitler came to power in Germany in no small measure because he offered the country and especially its privileged elites protection from the Bolshevik menace emanating from the Soviet Union. Stalin’s Russia viewed the forces of Fascism and Nazism as dark and virulent manifestations of Western imperialism and global capitalism that threatened the Soviet Union.

     In his fascinating and highly readable account of the pact, The Devils’ Alliance: Hitler’s Pact With Stalin, 1939-41, Roger Moorhouse, an independent British historian, writes that the “bitter enmity between the Nazis and the Soviets had been considered as a given, one of the fixed points of political life.  Now, overnight, it had apparently been consigned to history. The signature of the pact, then, was one of those rare moments in history where the world – with all its norms and assumptions – appeared to have been turned on its head” (p.142). Or, as one commentator quipped at the time, the pact turned “all our –isms into –wasisms” (p.2).

     According to Hitler’s architect Albert Speer, when the Führer learned at his mountain retreat that Stalin had accepted the broad outlines of the proposal Ribbentrop carried to Moscow, Hitler “stared into space for a moment, flushed deeply, then banged on the table so hard that the glasses rattled, and exclaimed in a voice breaking with excitement, ‘I have them! I have them!’” (p.35). But Moorhouse quotes Stalin a few pages later telling his adjutants, “Of course, it’s all a game to see who can fool whom. I know what Hitler’s up to. He thinks he’s outsmarted me but actually it’s I who has tricked him” (p.44).

    Which devil got the better of the other is an open and perhaps unanswerable question. For Germany, the pact allowed Hitler to attack Poland a little over a week later without having to worry about Soviet retaliation and, once Poland was eliminated, to pursue his aims elsewhere in Europe without a two-front war reminiscent of Germany’s situation in World War I up to Russia’s surrender after the Bolshevik revolution.  The conventional view is that for the Soviet Union, which had always looked upon war with Nazi Germany as inevitable, the pact at a minimum bought time to continue to modernize and mobilize its military forces.

     But, Moorhouse argues, Stalin was interested in far more than simply buying time. He also sought to “exploit Nazi aggression to his own ends, to speed up the fall of the West and the long awaited collapse of the West” (p.2). The non-aggression agreement with Nazi Germany provided the Soviet Union with an opportunity to expand its influence westward and recapture territory lost to Russia after World War I.  The pact ended almost exactly 22 months after its execution, on June 22, 1941, when Hitler launched Operation Barbarossa, the code name given to the German invasion of the Soviet Union. But during the pact’s 22-month existence, both Hitler and Stalin extended their authority over wide swaths of Europe.  By June 1941, the two dictators — the two devils — between them controlled nearly half of the continent.

* * *

      As late as mid-August 1939, Soviet diplomats were pursuing an anti-Nazi collective defense agreement with Britain and France. But Stalin and his diplomats suspected that the British and the French “would be happy to cut a deal with Hitler at their expense” (p.24).  Sometime that month, Stalin concluded that no meaningful collective defense agreement with the Western powers was feasible. Through the non-aggression pact with Nazi Germany, therefore, Stalin preempted the British and French at what he considered their own duplicitous game. Three days prior to the signing of the non-aggression pact, on August 20, 1939, Berlin and Moscow executed a commercial agreement that provided for formalized exchanges of raw materials from the Soviet Union and industrial goods from Germany. This agreement had been in the works for months and, unlike the non-aggression pact, had been followed closely in capitals across the globe.

     The non-aggression pact that followed on August 23rd was a short and generally nondescript document, in which each party guaranteed non-belligerence to the other and pledged in somewhat oblique terms that it would neither ally itself with nor aid an enemy of the other party.  But a highly secret protocol accompanied the pact — so secret that, on the Soviet side, historians suspect, “only Stalin and Molotov knew of its existence” (p.39); so secret that the Soviet Union did not officially acknowledge its existence until the Gorbachev era, three years after Molotov had gone to his grave denying the existence of any such instrument.  The protocol divided Poland into Nazi and Soviet “spheres of interest” to apply in the event of a “territorial and political rearrangement of the area belonging to the Polish state” (p.306).

     The accompanying protocol contained similar terms for Finland, Lithuania, Latvia, and Estonia, anticipating future “territorial and political rearrangements” of these countries. The protocol also acknowledged Moscow’s “interest in” Bessarabia, the eastern portion of today’s Moldova, then part of Romania, for which Germany declared its “complete disinterest” (p.306). For Stalin, the pact and its secret protocol marked what Moorhouse terms an “astounding success,” in which he reacquired a claim to “almost all of the lands lost by the Russian Empire in the maelstrom of the First World War” (p.37). Moorhouse’s chapters on how the Soviets capitalized on the pact and accompanying secret protocol support the view that the Soviet and Nazi regimes, although based on opposing ideologies, were similar at least in one particular sense: both were ruthless dictatorships with no scruples inhibiting territorial expansion at the expense of less powerful neighbors.

* * *

       After Nazi Germany invaded Poland from the west on September 1, 1939 (eight days almost to the hour after execution of the pact), the Soviet Union followed suit by invading Poland from the east on September 17th. The Nazi and Soviet occupiers embarked upon a “simultaneous cleansing of Polish society,” with the Nazis motivated “primarily by concerns of race and the Soviets mainly by class-political criteria” (p.57).  Moorhouse recounts in detail the most chilling example of Soviet class cleansing, the infamous Katyn Forest massacre, where the Soviets methodically executed approximately 21,000 Polish prisoners of war – high-ranking Army officers, aristocrats, Catholic priests, lawyers, and others, all deemed “class enemies.” Stalin attributed the massacre to the Nazis, and official acknowledgement of Soviet responsibility did not come until 1990, one year prior to the Soviet Union’s dissolution.

     The Soviet Union browbeat Estonia into a “mutual assistance” treaty that, nominally, obligated both parties to respect the other’s independence. Yet, by allowing for the establishment of Soviet military bases on Estonian soil, the treaty “fatally undermined Estonian sovereignty. Estonia was effectively at Stalin’s mercy” (p.77). Similar tactics were employed in Lithuania and Latvia. By mid-October 1939, barely six weeks after signing the pact, Stalin had “moved to exercise control of most of the territory that he had been promised by Hitler” in the secret protocol, “securing the stationing of around 70,000 Red Army troops in the three Baltic states, a larger force than the combined standing armies of the three countries” (p.78). By August 1940, each Baltic state had become a Soviet constituent republic.

     The Soviet Union also invaded Finland in November 1939 and fought what proved to be a costly winter war against the Finns, who resisted heroically. The war demonstrated to the world – and, significantly, to Nazi Germany itself – the weaknesses of the Red Army.  It ended in a stalemate in March 1940, with Moscow annexing small pieces of Finnish territory, but with no Soviet occupation or puppet government. The Soviet Union also annexed Bessarabia. Although the secret protocol had explicitly recognized Soviet interest in Bessarabia, Hitler saw the Soviet move as a “symbol of Stalin’s undiminished territorial ambition.” Though he said nothing in public, Moorhouse writes, “Hitler complained to his adjutants that the Soviet annexation of Bessarabia signified the ‘first Russian attack on Western Europe’” (p.107).

      In the same timeframe, Hitler extended Nazi domination over Norway, Denmark, Holland, Luxembourg and Northern France, as well as much of Poland, some 800,000 square kilometers.  Hitler and Stalin thus divided up Europe in 1940, with Nazi Germany becoming the preeminent power on the continent. Stalin “did less well territorially, with only around half of Hitler’s haul at 422,000 square kilometers, but was arguably better placed to actually absorb his gains, given that all of them were long standing Russian irredentia, with some tradition of rule from Moscow and all were neatly contiguous to the western frontier of the USSR” (p.106).

    Hitler’s concerns about the extent of Soviet territorial ambitions in Europe after its annexation of Bessarabia were magnified when the Soviets also demanded nearby northern Bukovina, a small parcel of land under Romanian control, nestled between Bessarabia and Ukraine. Northern Bukovina was Stalin’s first demand for territory beyond what the secret protocol had slated for Moscow. By late summer of 1940, therefore, the German-Soviet relationship was in trouble. The “mood of collaboration of late 1939 shifted increasingly to one of confrontation, with growing suspicions on both sides that the other was acting in bad faith” (p.197).

    In November 1940, Soviet Foreign Minister Molotov was summoned to Berlin to try to breathe new life into the pact. Hitler and Ribbentrop made a concerted effort to head off westward Soviet expansion with the suggestion that the Soviet Union join the Tripartite Pact between Germany, Italy and Japan and focus its territorial ambitions to the south, especially on India, where it could participate in the “great liquidation of the British Empire” (p.215).  Ribbentrop’s contention that Britain was on the verge of collapse was called into question when certain meetings with Molotov had to be moved to a bunker because of British bombings of the German capital.

    Molotov left Berlin thinking that he had attended the initial round in what were likely to be lengthy additional territorial negotiations between the two parties.  In fact, the November conference marked the end of any meaningful give-and-take between them. In its formal response back to Germany, which Molotov delivered to the German Ambassador in Moscow, the Soviet Union made clear that it had no intention of abandoning its ambitions for westward expansion into Europe in exchange for membership in the Tripartite Pact. No formal German response was forthcoming to  Soviet demands for additional European territory. Rather, the often-vacillating Hitler had by this time made what turned out to be an irrevocable decision to invade the Soviet Union, with the objective of turning Russia into “our India” (p.295).

* * *

    In the period leading up to the invasion in June 1941, Stalin refused to react to a steady stream of intelligence from as many as 47 different sources concerning a German buildup near the western edges of the new Soviet empire.  Stalin was obsessed with not provoking Germany into military action, “convinced that the military build up and the rumor-mongering were little more than a Nazi negotiating tool: an attempt to exert psychological pressure as a prelude to the resumption of talks” (p.229). Stalin seemed to believe that “while Hitler was engaged in the west against the British, he would have to be mad to attack the USSR” (p.230).

    But ominous intelligence reports continued to pour into Moscow. One in April 1941 concluded that Germany had “as many as one hundred divisions massed on the USSR’s western frontier” (p.238). In addition, over the previous three weeks, there had been eighty recorded German violations of Soviet airspace. “Such raw data was added to the various human intelligence reports to come in from Soviet agents . . . all of which pointed to a growing German threat” (p.238).  Still, Stalin “did not believe that war was coming, and he was growing increasingly impatient with those who tried to persuade him of anything different” (p.239).

    In the early phases of Operation Barbarossa, German troops met with little serious resistance and were able to penetrate far into Soviet territory.  In many of the areas that the Soviets had grabbed for themselves after execution of the pact, including portions of the Baltic States, the Germans were welcomed as liberators. The Soviet Union incurred staggering losses in the immediate aftermath of the invasion, losing much of the territory it had acquired as a result of the pact.

     Minsk, the capital of Soviet Belorussia, fell into German hands on June 28, 1941.  Its fall, Moorhouse writes, “symbolized the wider disaster not only for the USSR, but for Stalin personally.” It was the “moment at which his misjudgment was thrown into sharp relief. Only a dictator of his brutal determination – and one with the absolute power that he had arrogated for himself – could have survived it” (p.273).  Moorhouse’s narrative ends with the Germans, anticipating an easy victory, not far from Moscow as 1941 entered its final months and the unforgiving Russian winter approached.

* * *

      Moorhouse contends that the 1939 Nazi-Soviet non-aggression pact has largely been glossed over in Western accounts of World War II, which focus on the fall of France and Britain’s lonely battle against the seemingly invincible Nazi military juggernaut during the 22-month period when the Soviet Union appeared to be aligned with Germany against the West.  To the degree that there is a knowledge gap in the West concerning the pact and its ramifications, Moorhouse’s work aptly and ably fills that gap.

Thomas H. Peebles
La Châtaigneraie, France
May 13, 2016


Filed under Eastern Europe, European History, German History, History, Soviet Union

Remarkable Life, Remarkably Sad Ending



Rachel Holmes, Eleanor Marx, A Life

     Karl Marx’s third and youngest daughter Eleanor, born in 1855, became the successor to her father as a radical analyst of industrial capitalism. But she was also an instrumental if under-appreciated force in her own right in the emergence of social democracy in Victorian Britain and internationally in the late 19th century. Her remarkable life, as Rachel Holmes writes in her comprehensive biography, entitled simply Eleanor Marx, A Life, was “as varied and full of contradictions as the materialist dialectic in which she was, quite literally, conceived . . . If Karl Marx was the theory, Eleanor Marx was the practice” (p.xvi). Holmes, a cultural historian from Gloucestershire, England, who specializes in gender issues, characterizes Eleanor as the “foremother of socialist feminism” (p.xii).  She emphasizes how Eleanor supplemented her father’s work by defining for the first time the place of women in the working class struggles of the 19th century.

     But in conventional (Karl) Marxist thinking, the personal and the political are never far removed and they are ever so tightly intertwined in Holmes’ account, which focuses heavily on interactions within the Marx family circle. In the last third of the book, Holmes provides heartbreaking detail on how the three closest men in Eleanor’s life betrayed her: her father Karl; her father’s collaborator and Eleanor’s life-long mentor, Friedrich Engels; and her common law husband, Edward Aveling. The collective burden of these three men’s betrayal drove Eleanor to an apparent suicide in 1898 at age 43.

     Adhering to a chronological format, Holmes writes in a light, breezy style that, oddly, is well suited to bear the book’s heavy themes. Nearly everyone in the Marx family circle had nicknames, which Holmes uses throughout the book, adding to its informal flavor. Eleanor herself is “Tussy,” her father is “Möhr,” and her mother Jenny is “Möhme.” Eleanor had two sisters, Laura and Jenny, the latter referred to as “Jennychen,” little Jenny.  Jennychen died two months prior to father Karl in 1883. Two older brothers and one sister failed to survive infancy.

     The Marx family’s inner circle also included Engels, “the General,” and its long-time and exceptionally loyal servant, Helene Demuth, “Lenchen.” Engels, the son of a rich German industrialist with substantial business interests in Manchester, was Marx’s life-long partner and benefactor and akin to an uncle or second father to Eleanor. Lenchen, whom Holmes describes as “history’s housekeeper” (p.342) and the keeper of the family secrets, followed the Marx family from Germany to Britain and shared the progressive values of Eleanor’s parents. Lenchen and Eleanor’s mother Jenny were childhood friends and remained remarkably close in adulthood.

    Lenchen had a son, Freddy, four years older than Eleanor, who “grew up in foster care with minimal education” (p.199). As Eleanor grew older, she gradually intuited that Engels was Freddy’s father, although Freddy’s paternal origins were never mentioned within the family, least of all by Engels himself, who always seemed uncomfortable around Freddy. Freddy resurfaced in the tumultuous period prior to Eleanor’s untimely death, when he became Eleanor’s closest confidant — almost a substitute for her two brothers whom she never knew.

* * *

    By the time Eleanor was born in 1855, her father Karl was already famous as the author of important tracts on the coming Communist revolution in Europe. Banished from his native Germany as a dangerous radical, Marx took refuge in Britain. The household in which Eleanor grew up, “living and breathing historical materialism and socialism” (p.47), was disorderly but still somehow structured. Father Karl was notorious for being unable to balance his family’s budget, and was consistently borrowing money. Much of this money came from Engels.

    Eleanor came of age just prior to the time when British universities began to admit women, and she was almost entirely home-schooled and self-educated. Yet, the depth and range of her learning and intellectual prowess were nothing short of extraordinary. With her father (and Engels) serving as her guides, Eleanor started reading novels at age six, and went on to teach herself history, politics and economics. She also had an amazing facility for languages. The only member of the family who could claim English as a native language, Eleanor mastered German, her parents’ native language, then French, and later other European languages, most notably Russian. She became a skilled translator and interpreter, producing the first English language translation of Flaubert’s Madame Bovary.

    By her early twenties, Eleanor had demonstrated exceptional organizing skills that her father lacked, along with genuine empathy for the plight of working families (which her father also lacked). The more pragmatic Eleanor seemed to be everywhere that workers gathered and sought to organize. She supported dock and gas workers’ unions and their strikes. She became actively involved in London education policy, Irish Home Rule, the evolution of Germany’s Social Democratic Party, and the campaign in France for amnesty for the revolutionaries of the 1870-71 Paris Commune.

     Eleanor’s work in mobilizing trade unions provided impetus to the emergence of the Independent Labour Party in the early 1890s, Britain’s first democratic socialist political party. Her work clarified that for Eleanor and her socialist colleagues Marxism was a revolutionary doctrine in the sense that it demanded that people think in boldly different terms about capitalism, the industrial revolution, and the workers who fueled the capitalist system.  But it was also a doctrine that rejected violent revolution in favor of respect for the main tenets of liberal (“bourgeois”!) democracy, including elections, parliamentary governance and the rule of law.  Her views crystallized as she and her colleagues battled with anti-capitalist anarchists, who did not believe in any form of government. Eleanor saw “no way of squaring anti-democratic anarchism with democratic socialism and its commitment to work within a representative parliamentary system” (p.397), Holmes writes. Eleanor Marx was more Bernie Sanders than Bolshevik.

      While involved in organizational activities, Eleanor maintained an abiding interest in the theatre.  Unlike her first-class talent for organizing workers, her acting abilities were modest. Shakespeare and Ibsen were Eleanor’s particular interests among major playwrights, whose works contained messages for her ongoing organizing activities. Given her organizational skills, Holmes thinks that Eleanor would have made a brilliant theater director. But such a position was closed to women in her day. Instead, her “theatre for creating a new cast of radical actors in English art and politics” was the recently opened British Museum Reading Room, “its lofty dome a metaphor for the seat of the brain, workplace for writers and thinkers” (p.182). Here, in the aftermath of her father’s death in 1883, Eleanor wrote books and articles about her father, becoming his “first biographer and posthumous exponent of his economic theory” (p.195). All subsequent Marx biographers, Holmes indicates, have based their accounts on the “primary sources supplied by Eleanor immediately after her father’s death” (p.196).

     The Reading Room was also the venue where Eleanor first met Edward Aveling, an accomplished actor from comfortable circumstances who became a socialist and Eleanor’s common law husband. Aveling proved himself to be a monstrous villain whose malevolence and treachery dominate the last third of the book, the central character in a story that has the intricacy of a Dickens plot coupled with psychological probing worthy of Dostoevsky.

* * *

      Holmes describes Aveling as an “attractive, clever cad who played a significant role in popularizing Darwin and steering British secularists towards socialism. It’s easy to see why his anti-establishment, anti-religious, anti-materialist turn of mind appealed to Eleanor” (p.195). But Aveling was also a con artist and the author of a seemingly endless series of scams, stunningly skillful in talking people — Eleanor among them — into loaning him money that was rarely if ever repaid. Eleanor “failed to recognize that his character was the projection of a consummate actor” (p.195), Holmes argues.

     Aveling was further a first-rate philanderer, with a steady stream of affairs, most frequently with young actresses or his female students. Although these dalliances made Eleanor “emotionally lonely,” she came to accept them. Eleanor and Edward were proponents of what was then termed “free love,” but the freedom was all on Edward’s side.  The net result, Holmes writes, was that Eleanor took on the “aspect of conventional stoical wife and Edward of conventional philandering husband” (p.238).

    Marx and Aveling jointly published a seminal work on women in the social democratic movement, “The Woman Question: From A Socialist Point of View,” probably the only positive product of their relationship. “The Woman Question” made “absolutely clear,” Holmes writes, that the “struggle for women’s emancipation and the equality of the sexes is a prerequisite for any effective form of progressive social revolution” (p.262). Marx and Aveling aimed in their landmark essay to show that “feminism was an integral necessity, not just a single aspect or issue of the socialist working-class movement, and that sexual inequality was fundamentally a question of economics” (p.260). Aside from their genuine collaboration on “The Woman Question,” just about everything in the fourteen-year Aveling-Marx relationship was negative.

     Holmes documents how Eleanor’s family and friends privately expressed doubt about Aveling and his suitability for Eleanor. Toward the end of her shortened life, they were expressing these doubts directly to Eleanor. The couple did not marry because Aveling reported to Eleanor that he was still legally married to another woman who was “emotionally unstable, difficult, vindictive and refused to divorce him” (p.420).  In fact, Aveling schemed to preserve the marriage so that he could inherit his wife’s estate should she die. When she did die, Aveling hid the fact from Eleanor for five years. Finally, Aveling simply walked away from Eleanor and the house they kept together, “without explanation, pocketing all the cash, money orders and movable values he could find” (p.415), to marry a young actress named Eva Frye.

     When Eleanor learned of Aveling’s marriage sometime during the final days of March 1898, she was “confronted by the fact that Edward, after all his fine words about free love and open unions being as morally and emotionally binding as marriage under the law, was simply a liar. And she was a gull, a fool who had willingly suspended her disbelief – because she loved him” (p.420). One of the book’s most puzzling mysteries is why Eleanor, with her keen awareness of women’s vulnerability and their potential for mistreatment from men in what she saw as a rigidly patriarchal society, stayed so long with Aveling. Holmes finds an answer in the deeper recesses of what she terms Eleanor’s “cultural ancestry,” which presented her with the:

questionable example of loyal, dutiful wives and mothers. The formative examples of her Möhme and “second mother” Lenchen, both utterly devoted to her father, shaped her attitude to Edward. Unintentionally, Tussy’s mothers were dangerous, unhelpful role models, ill-equipping their daughter for freedom from subordination to romantic illusions (p.227).

     Eleanor’s frantic final weeks were marked by desperate correspondence with Freddy, Engels’s putative son. Realizing that a codicil to a will she had executed a few years earlier left most of her estate to Aveling, Eleanor wrote to Freddy that she was “so alone” and “face to face with a most horrible position: utter ruin – everything to the last penny, or utter, open disgrace. It is awful; worse than even I fancied it was. And I want someone to consult with” (p.418).

     Eleanor executed a second codicil, reversing the earlier one and leaving her estate to her surviving sister, nieces and nephews. The codicil was in an envelope addressed to her lawyer, still undelivered on the morning of March 31, 1898. That morning, after a vociferous argument with Edward, Eleanor sent her housekeeper Gertrude Gentry to the local pharmacist with a sealed envelope requesting “chloroform and small quantity of prussic acid for dog” (p.431-32).  The prescription required a signature, which had to be returned to the pharmacy.  Aveling was in the house when the housekeeper left to return the signature to the pharmacy, Holmes asserts, but when she returned the second time, she found only Eleanor, lifeless in her bed, wearing a summer dress she was fond of.  Aveling had by then left the premises.

    What Aveling did that day and why he left the house are among the many unanswered questions surrounding Eleanor’s death. The death was officially ruled a suicide after a slipshod coroner’s hearing, the second codicil was never given effect, and Aveling inherited Eleanor’s estate. Many, including Aveling’s own family, were convinced that Aveling had “murdered Eleanor by engineering her suicide” (p.433). Calls for Aveling to be brought to trial for murder, theft and fraud pursued him over the next four months, but became moot when he died of kidney disease on August 2, 1898.

* * *

      If Aveling’s duplicity was the most direct causative link to Eleanor’s apparent suicide, the revelation in Eleanor’s final years of an astounding betrayal on the part of her long-deceased father and Engels, at a time when Engels was dying of cancer, almost certainly contributed to Eleanor’s decision to end her life. But I will refrain from divulging details of the dark secret the two men had maintained with the hope that you might scurry to Holmes’ thoroughly-researched and often riveting account to learn all you can about this remarkable woman, her “profound, progressive contribution to English political thought – and action” (p.xi), and the tragic ending to her life.

Thomas H. Peebles
La Châtaigneraie, France
April 28, 2016


Filed under Biography, British History, English History, History, Politics