I bet many readers of this blog have seen or heard of the remarkable piece of journalism by the Detroit Free Press on the financial mess that has engulfed the city those reporters cover. Tom Sugrue posted a link to the piece on Facebook and, of course, he has also written a tremendous book on the decline of American cities, The Origins of the Urban Crisis. The research that went into the Freep piece struck many, including Tom, as an excellent example of historical methods applied smartly and carefully to a contemporary problem. It was also an example of quantitative or empirical history–something I don’t do much of and something I would like to hear our readers weigh in on.
In a piece for the Percolator blog at the Chronicle of Higher Education, Marc Parry quantifies the rise, fall, and rise again of history by numbers. The upshot of the piece is that while “cliometrics” fell out of fashion by the 1990s, today “cliodynamics” is on the rise. Given the apparent interest in digital humanities that many of us share (if somewhat cautiously and skeptically), I am not surprised that new ways of using, analyzing, and presenting data are remaking the perception of quantitative history.
I have posed a similar question before, and I remember Tim Lacy pointing to Sarah Igo’s book, The Averaged American. But I am not asking how historians have studied the use of empirical knowledge; I am asking how they use it themselves, and how they think that use has changed with the development of digital history. In other words, will we see a renaissance of quantitative history that overcomes the somewhat negative connotation the field has carried?