U.S. Intellectual History Blog

The Fun & Risk of Being Wrong

For my dissertation research, I read my fair share of sociology monographs from the 1950s and 1960s. Although unsure of what to expect at the start, I had in my mind something approximating a combination of social analysis with the quantitative approaches so common in sociology today.[1] Instead, many of the books I read were so sparse on evidence – or even on references to studies providing it – that it felt a bit like they were getting away with something. Of course, there were footnotes with nods to previous work – not unusually their own – and many of the books referred back to a couple of famous and ubiquitous studies, such as William F. Whyte’s Street Corner Society. On the whole, however, they read more like rough drafts of proposed arguments than robust defenses of them. It was as if they formulated an idea in their heads, read a bit of secondary literature on it, from which they gleaned a few historical examples, typed that all up and presto!, there was the finished product.

I have to admit to having ambivalent feelings about this. My first gut reaction was to think, “not fair! If this were all that was required, I could have three books published by now.” That, perhaps, is the response of someone who, having undergone a hazing, now desires to inflict it on others, even retroactively! On the other hand, it seemed incredibly seductive, and led me to perhaps (ok, probably) nostalgic speculations about the world of academia before a glutted job market resulted in an incredibly competitive environment where empiricism works as a major strategy for convincing hiring committees that your project is the most promising (i.e., correct) project. (And how much time have you spent in an archive, lately?)

Indeed, there seemed to be a freewheeling, nonchalant attitude about their work which I found appealing – it was as if they were all sitting in a room, simply tossing out ideas, without anyone feeling particularly defensive about or invested in being right. Would this not create an intellectual environment more interested in creative thinking than competitive posturing? Imagine if every conference paper or article ended with an implied shrug of, “then again, I could be wrong.” How would this change the environments we think and work in?

Moreover, I have long appreciated the value of a bold and completely wrong idea. Think of how many epic historians have come up with brilliant, useful, compelling arguments that also happened to be false – Frederick Jackson Turner, for example, or Richard Hofstadter.[2] What such historians accomplished was not always to get it right, but to force others to take creative and critical account of their arguments, thus spurring, in the process, more daring and groundbreaking scholarship. As I’ve often joked to friends, I would be perfectly content to have my life’s work assigned for decades in graduate seminars, not in spite of being wrong, but in some sense because of being wrong.

Then again, perhaps such an attitude plays too fast and loose with the truth. After all, is this not the overblown confidence of elite white men casually discussing the problems of the world, apparently in desperate need of their expert advice, unconcerned with having to rigorously check their assumptions? Indeed, any approach to sociological speculation that could earn Daniel Patrick Moynihan a (lasting, oddly enough) reputation as a serious thinker has got to be seriously flawed. (On that note, the best summary of Moynihan’s work I’ve ever read comes from Frances Fox Piven, who wrote in a review of his Maximum Feasible Misunderstanding, “[a]s always with Moynihan, it is difficult to distill the main ideas from the dazzling verbiage with which he flits from accounts of specific events and persons to half-completed generalizations about, say, the crisis of Western civilization in the mid-twentieth century.”[3]) More seriously, as was detailed in Trevor Burrow’s review of Mical Raz’s new book on poverty as cultural deprivation, such practices might have had something to do with the willingness of an entire generation of social scientists and social workers to accept ideas about poverty that were neither based in historical evidence nor rigorously tested. Once someone writes something down as though it is true and in no need of elaborate defense, it is very easy for others to pick it up on the mistaken assumption that the original theorist built it on a solid foundation, and such developments can have very serious consequences.

Yet I still wonder if there isn’t a way to get, if not the best, then at least a bit of both worlds – because even when you read, for example, Moynihan, you might not always understand at first what the hell he is talking about, but you enjoy the ride – and an intellectual environment based as much in the joy of creative thinking as in the importance of being correct sounds, at times, deliciously subversive and, well, just plain fun. But is there a place for intellectual play in our increasingly cramped and competitive workspaces, or is it all too risky to flirt with being wrong?

[1] At least, the ones that I read; I would not be surprised if this was related to the particular fields I was looking into, primarily research on juvenile delinquency, poverty, and community dynamics.

[2] Hofstadter was not wrong about everything, of course – there was much on which he was quite right – but suffice to say there are serious problems with his account of populists, progressives and the source of contemporary conservatism.

[3] Frances Fox Piven, “Whose Maximum Feasible Misunderstanding?,” Social Work, Vol. 14, No. 2 (April 1969), 96, Frances Fox Piven Papers, Box 74, Folder 18, Sophia Smith Collection (Northampton, Massachusetts).

6 Thoughts on this Post

  1. You wrote: “I would be perfectly content to have my life’s work assigned for decades in graduate seminars, not in spite of being wrong, but in some sense because of being wrong.”

    Perfect. And I agree completely.

The issue you’re taking up seems to be an old one in the history profession, at least—i.e. the difference between history as a humanistic/humanities endeavor (e.g. less strict exploration and citation of archival evidence) and history as a social science (e.g. strict adherence to researching primary documents, citations, incorporation into a long conversation/historiography, etc.). Obviously many works exist on a continuum between both poles.

What we want today, I think, is to be wrong (or right!) in line with the latter—using all the current standards of evidence and style! That will make us rigorous and exacting; less like the freewheeling white elite men of old! – TL

    • Yeah I can’t help thinking that this approach is deeply connected to the fact that they were elite white men, so confident in their status as the intellectuals of their society as to be somewhat exempt from having to, you know, provide evidence for their claims. And in general, to be so lazy on this stuff is kind of a form of arrogance; as if to say, you don’t need too much evidence from me, just bask in the brilliance of my formulations here!

2. This is an interesting discussion to someone like me, who holds a position in a literature dept. and often attends conferences with literary scholars. With respect to my field, I often yearn for precisely the opposite tendency: I wish fewer literary scholars would adopt “a freewheeling, nonchalant attitude about their work,” “simply tossing out ideas, without anyone feeling particularly defensive about or invested in being right” (and by right, you mean “empirically correct,” I think). From my disciplinary perch, most literary scholars are terribly afraid of someone calling their work “undertheorized,” but they are not terribly afraid of someone telling them to consult additional archives or to comb back through the available sources. They usually do not attempt to get things “right,” but rather to say something provocative, apparently radical, theoretically bold, etc. They essentially do exactly what you describe: “formulate an idea in their heads, read a bit of secondary literature on it, from which they glean a few historical examples, type that all up and presto!, there was the finished product.” I confess that I’m probably guilty of doing this too on occasion.

This is one reason I am so grateful for the historical profession’s entrenched habits of empiricism – most historians do, in the end, ask about the nature of the evidence, and they (generally speaking) care about getting it “right.” On average, intellectual historians tend to be more theoretically engaged than historians as a whole, and that’s why they often seem to me to occupy a kind of sweet spot of historical inquiry (to be reductive, they really care about both theory and evidence). Maybe I’m idealistic here, but that’s what I see displayed on this blog week after week.

If you want to expose yourself to some freewheeling, speculative, minimally documented claims about the past, I encourage you to spend a few days at the MLA annual convention and report back as to whether you find the majority of presentations intellectually exhilarating or terribly lax in their standards of historical argumentation. In any case, I just wanted to offer an alternative p.o.v. on this discussion from someone who occupies a disciplinary space that claims to honor rigorous “historicism,” but rarely concerns itself with building extensive empirical foundations for historical arguments. (It’s funny – there have been several recent arguments in literary studies about the field having become too historicist, when, in fact, I often think it’s not historicist enough!)

• This summarizes well why I feel ambivalent about the more lax kind of approach; I agree that the good stuff is somewhere in-between, in a sweet spot between pure empiricism and careless theory. If I did spend a weekend at the MLA, I imagine I would have something like the reaction you describe, although I can see myself laughing rather than crying at it, and meanwhile having a great time. That’s where my ambivalence comes in; I don’t think it is “good for me,” so to speak, to enjoy or participate in that level of abstraction, but I do enjoy myself. Guilty pleasure?

3. It’s interesting the way time alters perspectives. It sounds as if you were reading monographs from the older WASP sociologists who’d dominated sociology and were clinging to the ramparts of their ivory towers. Meanwhile, roughly in that time period, sociologists like Daniel Bell, Seymour Martin Lipset, and Robert Merton were striving to make their mark on the profession. They weren’t “elite white men” to start with; they began as Jewish outsiders. Similarly, even an Anglophile like Moynihan started as a Catholic outsider from Hell’s Kitchen, at a time when religion and ethnicity were more important than now. From today’s perspective the differences fade and their commonalities seem large.

    Coates on Moynihan: http://www.theatlantic.com/national/archive/2013/06/revisiting-the-moynihan-report/276936/

• There is a decent amount of truth to these points, but I think they are, nonetheless, a little overstated. First, although I by no means intend to downplay the role of anti-Semitism in this time period, I also think that this was just the moment when it became possible for some Jews to be folded into the category of whiteness — indeed, the fact that many Jewish intellectuals got to these positions of influence and acceptance and then didn’t want to lose them is perhaps one reason, it has been suggested, why some of them gravitated towards neoconservatism as the 60s went on. As for Moynihan, his Hell’s Kitchen origins are not untrue but have been overstated and played up, and although I grant you the point about the importance of religion, being Catholic (or Jewish) did not obliterate the privileges of being white and male for these men.

As for Bell & Lipset, they do better on the footnote front — Political Man, for example, is rife with references to other scholarship. However, many of their basic theories were based on very little research into the dynamics they were pronouncing on; as Michael Paul Rogin picked apart in his book The Intellectuals and McCarthy, Lipset, Bell, & other pluralists assumed certain political alignments and inclinations without really bothering to check if the support for McCarthy and similar extreme conservatives was coming from where they said it came from. When they did use data — such as when noting totalitarian sympathies amongst the working class in poll data — they assumed an obvious link between the way certain questions were answered and support for a particular segment of the Republican Party, although, as Rogin unpacked, the majority of McCarthy’s supporters came from the traditional conservative middle-class enclaves that were not newcomers to a fragile status but protectors of an old one.

      As for Moynihan, I have my own take on him:

      https://www.jacobinmag.com/2015/03/moynihan-report-fiftieth-anniversary-liberalism/
