U.S. Intellectual History Blog

Diagnosis: Intellectual Historian (Guest Post by Daniel Goldberg)

[The following is a guest post by Daniel Goldberg.]

Thanks much to LD Burnett and the entire gang here at USIH Blog for permitting me to wade on in and muck things up for everybody again. I promise to return the car with only minor dents and scratches. In my first post here – almost a year ago! – I self-identified as an intellectual historian, but noted that the vast majority of historians of medicine and public health write mostly social and cultural history. In my second post, I want to pick up this point and mull (gripe about?) the challenges of writing intellectual history for social and cultural historians.

Unlike most of the contributors and the readers here, I did not set out to be an intellectual historian. When I realized my passion for history, and my great good fortune in landing at an interdisciplinary graduate program with a proud tradition in the history of medicine and several excellent mentors on board, I embarked on a passionate program of reading, writing, and revising, primarily in the history of medicine and public health. Naturally, the majority of this work involved social history, which I quickly adored for its emphasis on bottom-up historiography and its rejection of the Great Man history that characterized at least some of its earlier traditions.

When I arrived at my current position, I had a manuscript I had been working on for some time, and I brought it to one of my new colleagues, a senior and extremely well-respected social historian of medicine. When we sat down to discuss the paper, the first thing he said to me was “You’re an intellectual historian!” Honestly, I had no earthly idea that this was so, but I also sensed intuitively that he was absolutely right (and my exposure to history and philosophy of science on the one hand and the writings of William Bouwsma on the other meant that I was not entirely in the dark as to what characterizes intellectual history).

And so, I began to own and self-identify as an intellectual historian, and to try as best I could to put that ownership into practice in my approach, my research questions, and my writing as well. (The MS was published, too!) But I now find myself in the somewhat peculiar position of writing an entirely different genre of history from the dominant methodologies to which members of my professional historical community are accustomed. There is a dissonance here, and I began, for several reasons, to wonder if this dissonance presented any kind of a professional (First World) problem. I presumed this was simply plain old junior faculty paranoia – until several trusted and senior colleagues concurred that it could indeed present some difficulties.

If you will forgive the navel-gazing thus far, it brings me to what I take to be an important issue: what counts as good evidence may differ across subfields even within the same discipline. This is one of my major challenges as a scholar trying to write responsible historiography that contributes to a larger conversation. Note that the issue is not only possible dissonance between subfields regarding what counts as good evidence in proving a claim in each, but also the standards by which peers assess whether a quantum of proof sufficient to ground the author’s thesis has been supplied. In my short academic career, I have been the beneficiary of astounding good fortune in my peer reviews, and have generally received excellent and helpful critical feedback on most of my work. And I dislike the idea of a general maxim by which a jilted author asserts that “the reviewer simply did not get what I was trying to do,” even though such an assertion is obviously true in some cases.

But on several occasions with submitted manuscripts, the feedback essentially centered on the reviewer’s belief that the sources on which I drew and the frameworks in which I situated them simply did not satisfy the arguments to which I was committed. And on these occasions, I often felt that the reviewer and I disagreed on what standard of proof was necessary to substantiate a claim made in an intellectual history context, and on whether the texts and kinds of evidence with which I was working were sufficient to satisfy said standards.

A major portion of my faculty development energies focus on writing pedagogy, and I spend quite a bit of time with the great folks in my University Writing Program, many of whom are trained in rhetoric & composition. One of the interesting concepts to which I have been exposed in this process is the notion of a discourse community. There is a lot of literature on this, but one of the primary experts is John Swales, and he identifies 6 characteristics of a discourse community:

  1. Broadly agreed set of common goals
  2. Mechanisms of intercommunication among members
  3. Participatory mechanisms used to provide feedback
  4. Utilization of genres
  5. Specific lexis
  6. Threshold level of expertise required of members

It is IMO interesting to think about the possibility that even within subfields of history, different methodologies and approaches may not constitute the same discourse community. Or, alternatively, perhaps historians of medicine and public health are indeed part of the same discourse community, but use slightly different genres (social/cultural vs. intellectual history) in that discourse.

In any event, the rubber really hits the road for me on the question of evidence. I am preoccupied these days with the question of how one identifies what quantum of proof is needed to sufficiently ground a claim made within the history of medicine but from the standpoint of intellectual history. And if the audience for whom I am writing, no matter how brilliant, has simply not internalized the types and standards of evidence marshaled to satisfy arguments made in intellectual history, how does one proceed to convince skeptical readers?

I have been fortunate to find a few junior historians of medicine who either write or at least share my affinity for intellectual history, and I was invited to submit to a panel addressing precisely this question at OAH (which I had to decline because I could not attend). So I am lucky that I am not alone in facing some of these challenges, which are both fun and a bit frightening to engage, truth be told.

What do you all think?

4 Thoughts on this Post

  1. Daniel–
    Very interesting post. Can you be a little more specific about the different evidentiary standards that social/cultural historians have vs. what they perceive to be the limitations or insufficiency of the sources of intellectual historians? You offer this up as a difference, but it seems pretty clear that what you are saying is that the dominant social/cultural historiography sees _insufficient_ sourcing in intellectual history practice, not _different_ sourcing to answer different kinds of questions. You, as an intellectual historian, however, never suggest that you think the social/cultural historians are insufficient in their sources–you seem to accept the idea that the sources they use are adequate to the questions they are trying to answer. So we have a collision–they (the social/cultural historians) have a hierarchical conception in which their sources are better than yours, and you have a different fields/different sources conception. Or at least that’s how I read you. You never say, for instance, that they make claims about changes in thought based on sources that show changes in behavior or changes in social organization–in other words, they infer that people are thinking differently because they are behaving differently. In my experience, social and cultural historians sometimes do indeed do this, and think their claims are strong because they are based on multiple archival sources, even though the sources don’t address or express changes in thought per se. From my point of view, this would be inadequate sourcing, whereas I expect that they might find your use of textual sources to show changing ideas insufficiently grounded in institutional records, for instance. Perhaps you could unpack this a little more, and explain more concretely what the difference between the sourcing is and whether the question is one of adequacy, difference, or some combination of the two.

  2. Hello Dan,

    Begin gush:

    First off, let me just say that it is an honor to have your perspectives on my post. I’m of course familiar with your work, and really appreciate your points here.

    /End gush

    Second, let me answer your question directly: Respectfully, I do not think I can be any more specific about the different evidentiary standards the social/cultural historians have vs. intellectual history, but I do think I can offer some thoughts on your really probing questions. I do not for a moment mean to suggest that the question is an empty one, but rather that the standards are to some extent determined by the relevant community practice. I read methods texts and sources, write drafts, subject them to workshops and peer review by other members of the community – here, intellectual historians – and over time hopefully get a better internalized sense of what quantum of evidence is sufficient to generate sustainable claims in intellectual history.

    I imagine you would have some much better answers to this question, and I would love to read them!

    But I am not trying to sidestep the issues on which I think you are really focusing. I think you are absolutely right to point out the hierarchy in the way I framed the post. To some extent, this probably reflects the political economy of the subfield. Crudely put, it’s the social and cultural historians’ dojo; again, I want to be perfectly clear that I am not lamenting that fact for a moment – I love social history (of medicine). I do not offer this as any kind of justification, but perhaps as an explanation for the hierarchy you observe.

    I also think your point that sometimes social and cultural historians issue claims for changes in thought from evidence of changes of behavior is illuminating, and I am going to have to watch for this closely in the subfield.

    Interestingly, the feedback I have received on some of my work from social/cultural historians has not simply been limited to concerns that the sources used do not justify the claims made for changes in thought or for the influence on behavior of a particular set of ideas (and here I try to be careful to document both changes in intellectual models and subsequent changes in behaviors. Often enough, I try to assemble a critical mass of evidence from which we can justifiably assert that no explanation for the changes in behavior is complete without accounting for the relevant changes in ideas and concepts). Indeed, my interlocutors have voiced these issues, especially because, as you suggest, I work a lot with professional journals, treatises, and correspondence between physicians, in which I think we can see the operation of key concepts as well as changes in various constellations of ideas. Although I work very hard to avoid using these sources in the service of Great Man History, it is also possible that the fact that such sources have been used in such ways is a concern. But on several other occasions, it seems as if the editors and readers have simply had no idea what to do with the work, because it looks so vastly different from much, albeit not all, of the excellent work that is being published in the subfield.

    I am not sure if I have illuminated much, but I do take your point about the hierarchical conception in my post (and, I suppose, in my thinking). It is a bit embarrassing, because much of my professional life has related to engaging deep questions about evidence, both in practice and in theory. As an attorney, I practiced mass tort litigation, which centered on questions of proof, causation, and evidence. And as an historian, I have written on and continue to develop my work on the ways in which ideas of Truth, evidence, and proof change in medicolegal culture due to changing models of objectivity. Hoist by my own petard, I guess . . .

    Thanks again for the insights and questions!

  3. Daniel–
    Thanks for your response (and your kind words!). I think you’re largely right that community standards are internalized within fields, so that we recognize what counts as evidence and what can and cannot be drawn from a source, in ways that are not made explicit. But those community standards become explicit when members decide to draw a line and have to justify why they won’t accept a particular account, or find it insufficiently sourced. That is, saying “insufficient” isn’t sufficient, unless all of your interlocutors agree, in which case no explicit argument is necessary. But when you have clashing standards and a lack of consensus between two subcommunities, unless one party has the institutional power to peremptorily dismiss the other’s account, then some explicit and conscious justification of why this or that account is insufficient would seem to be necessary. Now, it may be that the social/cultural historians have all the power in your field (they edit the journals, do the peer-reviewing, sit on editorial boards, run the conferences, etc.) and they can afford to say “this is insufficient” without challenge from other authorities (in this case, intellectual historians). But it sounds like that’s not exactly the case, and when it’s not, I would expect to see people striving to make explicit and articulate what counts as adequate documentation for a claim. The result may be simply the “different fields/different sources” conclusion, in which case we see the publication of work in intellectual history that uses a different (not lesser) standard. But if the claim is “our standard is superior to yours,” and we refuse to, say, accept a claim about changing ideas about mental illness based on published writings, rather than, say, details of clinical practice, then the intellectual historian is shut out at the very place where he or she is claiming expertise. Which is more about the power of one subcommunity to enforce a norm for the entire community.

    At some level, I imagine, all historians are part of a community of discourse–the level at which we draw the line between what counts as history and what doesn’t. But within the community, there are obviously different subfields and standards and expectations within those subfields. My preference for how we organize the relationships between those subfields is one in which larger groups don’t police the claims of smaller ones, or claim that their standards are the only ones, or confuse the standard of their subgroup for the standard of the larger community of historians. In other words, intellectual historians shouldn’t have to justify their source use to people who hold that intellectual history source use is inherently inadequate. But if, as an intellectual historian, you are seeking to challenge what historians have claimed in the social/cultural realm (understood that the lines between these things are pretty porous), you have to play by home team rules: no intellectual history designated hitter when you’re playing in the social/cultural history home park.

    Thanks again for an interesting post.

  4. Daniel (Goldberg),

    I would need to know more specifics about your critics (and your mss) to understand their claim that your writing (whether in terms of evidence or analysis) did not meet their standards. But let’s go basic for a bit before hitting your perceived issues:

    As we all know, evidence can work in many ways, ways in which we were all trained in graduate school (whether social or intellectual historian). First there is the question of primary or secondary. One can get in trouble for excessive use of either (all primary and no sense of historiography, or all secondary with no firm grounding in letters, papers, etc.). Next there is the diversity of works under either category. Primary includes letters, autobiographies, archived papers, visual sources, oral histories, newspaper accounts, books and articles from the period, etc. Secondary includes oral histories, regular histories, incomplete papers, biographies, etc. There is some overlap, but there are some clear distinctions to be made in many cases.

    Then there is the question of the claims based on these pieces of (good) evidence. Does one’s claim hew closely to the evidence, or does it speculate further out (i.e., too far out for some)? The latter might get an intellectual historian into trouble if she/he extrapolated too far from singular sources of thought, or from small historical discourse communities. I don’t deal with social historians regularly, but I can see issues with allowing for “related ideas.” One would then need to back up claims about the extrapolation of ideas with sources from newspapers or other books, with the needed citations, all within historical context. I don’t know where social historians cut off related ideas from main ideas. I think intellectual historians are more forgiving on that front, or at least more willing to be speculative.

    To recap, on differences between social and intellectual historians, it might not be so much about “evidence” as about analysis and extension. I would expect that the social historians wouldn’t differ from us in terms of TYPE of evidence, but rather on *how* we extrapolate. They often like to see data sets and some quantification. Intellectual historians will take that evidence too, but we are also okay with seeing widespread fragments of core ideas such that the effects are “evident” (if not one-for-one, word-for-word clear). This is how Dan Rodgers’ *Age of Fracture* worked: some quantification here and there, but ubiquitous evidence of qualitative fragmentation across numerous fields and institutions. Fracture has many synonyms and many variations (by nature). Maybe Rodgers’ book isn’t the best example. But there are many works of intellectual history where “related” evidence might not appear superficially connected to those not acquainted with some deeper currents (e.g., all the variations of ‘modernity,’ ‘individualism,’ and ‘freedom’).

    My two cents. – TL
