U.S. Intellectual History Blog

History By Numbers

[Photo: The abandoned Michigan Central Station in Detroit, Michigan]

I bet many readers of this blog have seen or heard of the remarkable piece of journalism by the Detroit Free Press on the financial mess that has engulfed the city those reporters cover.  Tom Sugrue posted a link to the piece on Facebook and, of course, he has also written a tremendous book on the decline of cities, The Origins of the Urban Crisis.  The research that went into the Freep piece struck many, including Tom, as an excellent example of historical methods being applied smartly and carefully to a contemporary problem.  It was also an example of quantitative or empirical history–something I don’t do much of and something I would like to hear our readers weigh in on.

In a piece for the Percolator blog at the Chronicle of Higher Education, Marc Parry quantifies the rise and fall and rise again of history by numbers.  The upshot of the piece is that while “cliometrics” fell out of fashion by the 1990s, today “cliodynamics” is on the rise.  With the apparent interest in digital humanities that many of us share (if somewhat cautiously and skeptically), I am not surprised that new ways of using, analyzing, and presenting data are remaking the perception of quantitative history.

I have posted on a similar question before, and I remember Tim Lacy pointing to Sarah Igo’s book, The Averaged American.  But I am not asking how historians have studied the use of empirical knowledge; I am asking how they use it themselves, and how they think that use has changed with the development of digital history.  In other words, will we see a renaissance of quantitative history that overcomes the somewhat negative connotation the field has carried?

7 Thoughts on this Post

S-USIH Comment Policy

We ask that those who participate in the discussions generated in the Comments section do so with the same decorum as they would in any other academic setting or context. Since the USIH bloggers write under our real names, we would prefer that our commenters also identify themselves by their real name. As our primary goal is to stimulate and engage in fruitful and productive discussion, ad hominem attacks (personal or professional), unnecessary insults, and/or mean-spiritedness have no place in the USIH Blog’s Comments section. Therefore, we reserve the right to remove any comments that contain any of the above and/or are not intended to further the discussion of the topic of the post. We welcome suggestions for corrections to any of our posts. As the official blog of the Society of US Intellectual History, we hope to foster a diverse community of scholars and readers who engage with one another in discussions of US intellectual history, broadly understood.

  1. Ray, I’m glad you posted this. I have been thinking about one particular “cliodynamic” tool that is becoming more ubiquitous lately, the Google ngram.

    There are some things about ngrams I don’t know, and there are some things I don’t like, and there’s probably some overlap between them.

    Here’s what I don’t know:

    1) Is the output of an ngram query adjusted so that occurrences of a term are shown as a percentage of total words in print, or as a percentage of total words in print for works of a particular kind? Without such an adjustment — or the option to make one — the directional change of an ngram line seems meaningless.

    2) Does the Google ngram query eliminate duplicate copies / editions of books?
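    [Editor’s note on question 1: Google’s documentation does describe the ngram y-axis as a normalized relative frequency — the count of a phrase in a given year divided by the total number of words in the corpus for that year. A minimal sketch of that kind of normalization, using entirely made-up counts rather than real corpus data:]

```python
# Hypothetical raw counts per year; real figures would come from
# Google's published ngram datasets, not these invented numbers.
term_counts = {1950: 120, 1960: 340, 1970: 200}          # occurrences of a term
corpus_totals = {1950: 1_000_000, 1960: 2_000_000, 1970: 1_600_000}  # total words

# Normalize: the term's share of all words printed that year, as a percentage.
# Without this step, a rising line could simply reflect more books being printed.
relative_freq = {
    year: 100.0 * term_counts[year] / corpus_totals[year]
    for year in term_counts
}

for year, pct in sorted(relative_freq.items()):
    print(f"{year}: {pct:.4f}%")
```

    [Note that raw counts and normalized frequency can tell opposite stories: in the sketch above the raw count falls from 1960 to 1970, while whether the normalized line falls too depends entirely on the corpus totals — which is the commenter’s point about the adjustment mattering.]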

    Here’s what I don’t like: the increasing reliance of historians on a private, third-party for-profit company to gather, store, and manage information in a way that is adequate for the work of constructing a sound historical argument.

    The flaws with Time on the Cross were flaws of the historians’ designing. But with Google ngrams, historians are relying on someone else to design a tool that is supposed to make statistical sense of textual evidence. This strikes me as putting too much trust in Google to “get it right” methodologically.

    Just my two cliophilic, technoskeptic cents.

  2. Thanks for the kind words about my work. As someone trained, in part, by various “new social historians,” I lament the decline in historians’ ability to conduct quantitative research, to use data in their own work, or to assess and critique quantitative scholarship, especially in other social sciences. But my work is also strongly influenced by political, cultural, and intellectual histories. Hard data can be very useful for historians, but it is only one of many sources to make sense out of the past, and often not the best. The limits of quantitative research are clear in the DFP article on Detroit’s bankruptcy. The authors trace the ebb and flow of tax dollars, pension obligations, and public spending in Detroit. For its clear presentation of those data, the article is indispensable. I don’t know of another recent journalistic account of urban finance that is so grounded in historical research. But it’s the first word, not the last. The statistics tell a story, but also lend themselves to a limited understanding of causality that downplays state and federal policies that affected Detroit, shifting ideologies about government’s role in urban life, and the reshuffling of power between cities and suburbs. To read Detroit’s fiscal data, it’s important to consider the rise of conservatism, the limits of liberalism, the racialization of urban space, the rise of market-based understandings of economics and politics, the collapse of union power, and the consequences of societal fracture. Quantitative scholarship is a start, but the histories of politics, ideology, political economy, race, and culture are necessary for the full picture.

  3. Thanks to LD and Tom for their comments–two very smart historians poking at a pretty interesting beast. I had two intentions behind my post and they relate to the two comments. First, as LD suggests, there is an uncritical amazement that follows the use of charts and numbers that I find silly–especially because it seems that including color in these charts magically makes the math behind them more substantial. Second, as Tom nails it, quantitative studies are ubiquitous and useful and those of us who write for various publics need to use and understand these methods. However, as was suggested in the really interesting stream of comments that appeared underneath Tom’s initial post on facebook, there are ideas and trends and human decisions that created the numbers that the DFP followed in their story. In an era when the general notion of liberal arts education seems constantly in need of a defense and the elevation of STEM as the savior of general education has displaced the liberal arts, it seems to me that many of us have more than a mere scholarly interest in how quantitative methods can be harnessed for our collective fields.

    I am interested in hearing from folks who have examples of collaboration among scholars who demonstrate how streams of analysis come together with quantitative methods and examples of work by authors who do this themselves. Obviously, I think Tom Sugrue is one clear example. I know the discipline of Political Science has suffered (I think that is the apt term) a schism over quantitative methods and theory, and sociology might have a similar problem. History has not, mostly because it doesn’t seem that empirical research has divided departments or searches for new faculty. The divide might be along how we understand and use digital methods (digital humanities) but again, I am interested in hearing from folks about how they view these issues.

  4. James Livingston, in Against Thrift, built his arguments on econometrics demonstrating that net investment out of after-tax profits has declined since 1919. He used this information to argue that consumption was more important to the economy than investment. I think this quantification strengthened his case. I believe Steve Roth assisted him in collating the numbers.

  5. Fascinating piece.

    Love the comment from Tom Sugrue: a great distillation of what seems like the most profitable approach to blending the quantitative and the qualitative in historical research.

  6. Detroit Free Press:

    • Taxing higher and higher

    • Reconsidering Coleman Young

    • Downsizing — too little, too late

    • Skyrocketing employee benefits

    • Gifting a billion in bonuses

    • Missing chance after chance

    • Borrowing more and more

    • Adding the last straw — Kilpatrick’s gamble

    Sounds rather like National Review or John Stossel, except they’d have added Coleman Young’s infamous “To attack Detroit is to attack black,” perhaps not unfairly.

    http://www.creators.com/opinion/john-stossel/stalled-motor-city.html

    “MSNBC host Melissa Harris-Perry — the same TV commentator who said Americans need to stop raising kids as if they belong to individual families — had an extraordinary explanation for why the city of Detroit sought to declare bankruptcy last week: not enough government.

    “This is what it looks like when government is small enough to drown in your bathtub, and it is not a pretty picture.” She says budget-cutting Republicans threaten to transform all of the U.S. into Detroit.

    What? Detroit has been a “model city” for big-government! All Detroit’s mayors since 1962 were Democrats who were eager to micromanage. And spend. Detroit has the only utility tax in Michigan, and its income tax is the third-highest of any big city in America (only Philadelphia and Louisville take more, and they aren’t doing great, either).

    Detroit’s automakers got billions in federal bailouts.

    The Detroit News revealed that Detroit in 2011 had around twice as many municipal employees per capita as cities with comparable populations. The city water and sewer department employed a “horseshoer” even though it keeps no horses.

    This is “small enough government”? Harris-Perry must have one heck of a bathtub.”

    Good stuff, people. There is hope.

Comments are closed.