“We have been the thing against which normality, whiteness, and functionality have been defined.” Robin D. G. Kelley
In 1993, Rudolph Giuliani ran for mayor of New York City on the campaign slogan: “One Standard, One City.” As Kevin Baker later wrote in a scathing profile of Giuliani’s 2008 bid for the presidency, this motto “implied that somehow black New Yorkers were getting away with something under a black mayor,” Giuliani’s foe, David Dinkins. In other words, Giuliani’s call for universal standards was shot through with racial assumptions.
In the late 1980s and early 1990s, racial tensions flared in New York City to a degree unusual even for a city notorious for racial conflict. In 1986, as one of many examples, a group of white teenagers brutally assaulted three black men whose car had broken down in Howard Beach, a white ethnic enclave of Queens. One of the black men, Michael Griffith, was hit by a car and killed while attempting to flee the white mob. Widespread protests even compelled Dinkins’s predecessor Ed Koch—not exactly beloved by black New Yorkers—to compare the incident to a lynching. Such racial violence was captured by Spike Lee’s acclaimed 1989 film, Do the Right Thing. That most of Lee’s characters represented depthless racial stereotypes seemed appropriate, given that late-twentieth-century Americans were conditioned to think about racial identity in zero-sum terms.
But while this “lynching” weighed heavily on the minds of black New Yorkers, white New Yorkers were more likely to believe that racial tensions impinged upon their ability to move about the city freely. In this context, Giuliani’s campaign promise to restore order to the “Ungovernable City,” a phrase popularized by a 1961 Nathan Glazer Commentary article, traded on a collective siege mentality in which white New Yorkers, “already harassed at every turn by squeegee men, trash storms, and peddlers,” as Baker wrote, “were on the verge of losing control of the city entirely—maybe even at the precipice of some sort of apocalyptic racial massacre.” Despite the fact that Dinkins had already begun systematically suppressing low-level crime (or rather, the visible effects of poverty), the narrative of disorder rose to the status of moral panic. As Richard Cohen argued in The Washington Post: “Aside from the deranged, there’s not a single Gothamite who thinks it has gotten better under Dinkins—no matter what his statistics say.” On the watch of a permissive black mayor, blacks had gotten away with murder. It was time to restore “one standard” for “one city” (while apparently ignoring other standards, like “statistics”).
And so it goes in Colorblind America.
Title VII of the 1964 Civil Rights Act, because it barred preferential treatment in addition to discrimination, technically forbade affirmative action of the racial quota type. The Civil Rights Act, in this way, enshrined as national policy Martin Luther King’s celebrated words about people being “judged not by the color of their skin but by the content of their character.” America was to be post-race.
It didn’t work out that way, of course.
In the immediate aftermath of the Civil Rights Act, most companies continued their usual practice of exclusively employing whites. They could legally rationalize this practice by appealing to the colorblind principle of “merit,” while conveniently ignoring that merit, like most standards, was highly subjective. They could ignore that merit, in many instances, was embedded in the history of a nation that had enslaved black people as recently as 140 years ago (“two 70-year-old ladies, living and dying back-to-back,” as Louis C.K. put it to Jay Leno with his usual perceptive wit). Colorblindness rested on the fantasy that we could break from this history; that it did not “weigh like a nightmare on the brains of the living.”
The urban riots of the sixties startled many Americans out of their colorblind fantasies. In response to the riots, Lyndon Johnson assembled a National Advisory Commission on Civil Disorders, which issued an influential report—the Kerner Report—recommending color-conscious affirmative action. Johnson, heeding such advice, ditched the colorblind approach to enforcing Title VII. Since all taxpayers—white, black and everyone else—also funded building projects, the federal government would ensure that all taxpayers be hired to build those projects. Such logic was applied to higher education as well. Since all taxpayers—white, black and everyone else—subsidized American universities, all taxpayers would be given access to them. Such access required race-conscious affirmative action.
Conservatives, of course, were never happy with race-conscious affirmative action. Neoconservatives were the first and most vociferous critics, particularly as affirmative action applied to higher education. In 1968, political scientist John Bunzel authored a critical article for The Public Interest about the newly formed Black Studies program at San Francisco State College, where he taught. Bunzel worried that Black Studies would intensify the groupthink tendencies he believed were inherent to Black Power and other identity-based movements, and that it would “substitute propaganda for omission,” “new myths for old lies.” But to Bunzel, the worst idea put forward by Black Studies was that high standards codified racial discrimination and would therefore need to be revised or dumped altogether. This premise worked at two levels. First, Black Studies scholars believed that their knowledge must be created from scratch in order to undo the scholarly reproduction of racist norms. Thus, they often, at the outset, eschewed footnotes, peer review, and other traditional practices that, to Bunzel, ensured “one standard” of scholarly excellence. Second, Black Studies advocates desired admission quotas in order to ensure that the majority of the students who majored in Black Studies were in fact black. Nathan Hare, the first director of the San Francisco State College program, even argued that college applicants should provide photos. “How else,” he asked, “are we going to identify the blacks?” In response to this reasoning, Bunzel asked a question of his own, exemplary of the colorblind rhetoric that shaped the conservative critique of affirmative action for decades to come: “Is color the test of competence?”
Throughout the 1970s, conservatives grew increasingly outspoken in their opposition to affirmative action. They considered it anti-American, or, in Senator Orrin Hatch’s words, an “assault upon America, conceived in lies.” Reagan’s anti-affirmative action rhetoric, though of the kinder sort, carried a lot of weight: “I’m old enough to remember when quotas existed in the U.S. for the purpose of discrimination. And I don’t want to see that happen again.” In his 1975 book, The Morality of Consent, Yale jurist Alexander Bickel wrote: “Discrimination on the basis of race is illegal, immoral, unconstitutional, inherently wrong, and destructive of democratic society. Now this is to be unlearned, and we are told that this is not a matter of fundamental principle but only a matter of whose ox is gored.” This legal defense of colorblindness guided the Reagan administration’s efforts to overturn affirmative action.
Reagan’s civil rights appointments set the tone of his administration’s approach to affirmative action and other civil rights laws. Clarence Thomas was tapped to head the Equal Employment Opportunity Commission (EEOC), and William Bradford Reynolds was named Assistant Attorney General for Civil Rights. Both were known to disparage race-conscious affirmative action; Reynolds went so far as to call it “morally wrong.” With Thomas and Reynolds taking the lead, the Reagan administration chipped away at the enforcement of affirmative action. For example, it let thousands of firms off the hook by raising the size threshold (from 50 to 250 employees) at which firms were required to comply with federal affirmative action regulations. Under Thomas’s direction, the EEOC quit seeking to identify patterns of discrimination. Instead, Thomas merely pressed a few cases of individual discrimination, including some supposed cases of “reverse discrimination.”
During his second term, Reagan stepped up his assault on affirmative action. The Justice Department ordered 53 cities to shut down their affirmative action policies. In defense of these orders, Reagan invoked Martin Luther King’s vision of a colorblind America, claiming that his administration was committed to “a society where people would be judged on the content of their character, not the color of their skin.” Wrapping measures that would limit black employment in King’s rhetoric of racial cooperation was ironic in all sorts of ways, especially since Reagan had a history of racially inflammatory remarks. In August 1980, shortly after winning the Republican nomination, he gave a speech in the small town of Philadelphia, Mississippi, where three civil rights activists had been gruesomely murdered in 1964, notoriously pronouncing, “I believe in states’ rights.” Although his supporters claimed such words bespoke his libertarianism, given the location of the speech, and his choice of words, Reagan’s rhetoric was an obvious if tacit appeal to southern white voters. But in another ironic twist, all but 3 of the 53 cities refused Reagan’s orders to halt affirmative action. Most firms, as well, voluntarily kept their affirmative action hiring policies in place. It turned out that a multicultural workforce was good for business, or, as Gary Gerstle put it: “In ways that Karl Marx would have well understood, the practices of multinational capital constitute the materialist underpinnings of multiculturalist forms of imagining.”
Late twentieth-century American intellectual life had its own history of colorblindness. Saul Bellow’s infamous question—“Who is the Tolstoy of the Zulus?”—might best be thought of as the literary equivalent of Giuliani’s “One Standard, One City” slogan. “One Standard, One Canon.” “One Standard, One Academy.” Allan Bloom, whose Closing of the American Mind made him the nation’s most famous defender of “one standard,” thought the denial of philosophic and aesthetic truth helped “highly ideologized” students marshal identity politics onto campus. Paraphrasing academic doctrine, he rhetorically asked: “Who says that what universities teach is the truth, rather than just the myths necessary to support the system of domination?” Bloom specifically regretted the prominent argument that standards merely offered cover for institutional forms of racism. Such thinking, he contended, validated claims that blacks were less successful on campus due to deeply rooted power structures. “Black students are second-class not because they are academically poor but because they are forced to imitate white culture,” Bloom wrote, with no effort to hide his sarcastic tone.
Bloom’s antipathy to Black Power was informed by his experience while teaching at Cornell University during the campus upheaval of 1969, when militant black students infamously brandished guns to magnify their demands for affirmative action and the implementation of Black Studies. As gadfly writer Christopher Hitchens later wrote: “Chaos, most especially the chaos identified with pissed-off African Americans, was the whole motif of The Closing of the American Mind.” In other words, Bloom was not only concerned about the philosophical anarchy that marked a relativistic culture. He was also anxious about the more tangible disorder that seemingly overwhelmed universities in the wake of sixties liberation movements. Bloom’s Arnoldian view of culture (“the best that has been thought and said”) reflected, as Corey Robin puts it in The Reactionary Mind, “the excellence of a world where the better man commands the worse.”
As many readers will no doubt be aware, my muse for writing this essay is the recent controversy sparked by Nils Gilman’s guest post—“What is the Subject of Intellectual History?” For better or worse, I am partly responsible for the hullabaloo. After Nils left a provocative comment on Ben’s post (“Irrational Thought and American Intellectual History”) about how intellectual history should be more narrowly conceived of as the history of “coherent thought,” I asked him to elaborate on this point in the form of a guest post. He obliged with what I thought was a coherent, if polemical, defense of a particular way of thinking about intellectual history. I strongly disagree with Nils’s normative argument about limiting the types of sources intellectual historians use. Like many of you, I have a much more capacious understanding of intellectual history, and besides that, I am not invested in disciplinary boundary work. Along these lines, I expected that our smart commentariat would push back against Nils, and I was not disappointed. However, I did not expect that Nils would be tarred as elitist, sexist, and racist. In retrospect, this was surprisingly naïve of me—I say surprisingly because I am writing a history of the culture wars. Of all people, I should have been more attuned to the larger context into which Nils was inserting his neo-Arnoldian arguments. Upon such reflection, the eloquent response to Nils offered by Edward Blum practically wrote itself.
That said, I don’t think Nils’s approach is necessarily elitist (or racist and sexist). In part, this is because I’ve read his book, Mandarins of the Future: Modernization Theory in Cold War America, which is framed in anti-elitist ways. Perhaps it’s also because I feel that, as academics, we should read each other’s arguments in good faith, however much we might disagree with them. We shouldn’t assume the worst. Of course, Nils didn’t do himself any favors. His gendered examples (Honey Boo Boo, Jenna Jameson) and metaphors (“virgins in a whorehouse”) seemed explicitly designed to inflame. As did his pairing of Kant with illiterate slaves. In attempting to mollify those who labeled him a racist for his Zulu remark, Saul Bellow also hurt his own cause when he told a reporter that he was merely “speaking of the distinction between literate and preliterate societies.”
For me the saddest part about the debate is that, as Dan Wickberg noted, “the heat has tended to overwhelm the light.” I say this because I think there are serious questions that are far from answered. As I asked in the comments thread: Is there a way to defend distinction, in the Arnoldian sense, without also defending hierarchy (class/race/gender)? Can the canon be diversified? Or can it only be destroyed?
In light of how “one standard”—colorblindness—has made it more difficult to achieve racial equality in practice, these are particularly thorny questions.
The conservative argument against the cultural turn—an argument for “one standard”—often worked as a proxy for a hierarchical vision of society. And yet, even with such an awareness, plenty of historians with leftist (anti-hierarchical) political commitments fretted about a lack of concern for “telling the truth about history,” as Joyce Appleby, Lynn Hunt, and Margaret Jacob titled their 1994 book. These three historians charged relativistic, antinomian cultural historians with neglecting the longstanding purpose of historical craft: shedding light on truth. Eschewing such a position, what Charles Taylor referred to as a “subjectivist, half-baked neo-Nietzscheanism,” Appleby, Hunt, and Jacob argued in favor of “a democratic practice of history [that] encourages skepticism about dominant views, but at the same time trusts in the reality of the past and its knowability.”
Significant political stakes were involved in such a fight against epistemological anarchists. “It is as if higher education was opened to us—women, minorities, working people,” worried Appleby, Hunt, and Jacob, “at the same time that we lost the philosophical foundation that had underpinned the confidence of educated people.” The struggle for representation was, in part, a struggle for intellectual authority, the very premise of which was undermined by relativistic theories of history and power. With this in mind, the authors of Telling the Truth About History believed a calculated if limited defense of traditional historical practice necessary. “Rather than underlining the impossibility of total objectivity or completely satisfying causal explanation, we are highlighting the need for the most objective possible explanations as the only way to move forward, perhaps not in a straight line of progress into the future, but forward toward a more intellectually alive, democratic community, toward the kind of society in which we would like to live.”
In other words, standards matter. The obliteration of standards will not help obliterate hierarchy. As Toni Morrison put it: “The people who invented the hierarchy of ‘race’ when it was convenient for them ought not to be the ones to explain it away, now that it does not suit their purposes for it to exist.”