HOPE 51.3 (June 2019)

The June issue is a special issue on the history of macroeconometric modeling, with guest editors Marcel Boumans and Pedro Garcia Duarte.

Marcel Boumans and Pedro Garcia Duarte, "The History of Macroeconometric Modeling: An Introduction"

In the past two decades, part of the history of economics has turned from histories of economic ideas or economic thought, focused on the study of “theories” and “schools of thought,” to histories of epistemic mediators such as “models,” “experiments,” “measurements,” and “observations.” The articles in this special issue show that a change of perspective in historical analysis toward the practice of macroeconometric modeling enriches the history of macroeconomics. The history of macroeconomics is not only a history of ideas; it also includes histories of tools, especially macroeconometric models. These models are never built by one person but require the close cooperation of multiple teams, each from a specific discipline, and their workplaces are not necessarily located at universities. Unlike a theory, a tool is designed and made for a specific purpose and has clients. This shift of focus from macroeconomic theories to macroeconometric models may give us a better understanding of the unfolding of modern economics.

Erich Pinzón-Fuchs, "Lawrence R. Klein and the Making of Large-Scale Macroeconometric Modeling, 1938–55"

Lawrence R. Klein was one of the most important figures in the collective development of macroeconometric modeling, a novel scientific practice that dominated macroeconomics through the first decades of the second half of the twentieth century. Understanding how Klein developed his identity as a macroeconometrician and how he forged a new scientific practice of macroeconometric modeling during 1938–55 is essential for drawing a clear picture of his importance in the creation and further development of macroeconometric modeling. Toward this aim, I focus on Klein’s early trajectory as a student of economics and as an economist, and I examine the extent to which the people and institutions Klein encountered helped him shape his own image of economics, his identity as an economist, and a new scientific practice in the United States. I describe Klein’s contribution as a new way of producing scientific knowledge: the construction and use of complex tools (macroeconometric models) within specific institutional configurations (econometric laboratories), for explicit policy and scientific objectives, by experts with well-defined roles (arranged in scientific teams), all embodied in a new scientific practice (macroeconometric modeling).

Roger E. Backhouse and Beatrice Cherrier, "The Ordinary Business of Macroeconometric Modeling: Working on the Fed-MIT-Penn Model, 1964–74"

The Fed-MIT-Penn (FMP) model exemplifies the Keynesian models later criticized by Lucas, Sargent, and others as conceptually flawed. For economists in the 1960s such models were “big science,” posing organizational as well as theoretical and empirical problems. The model was part of an even larger industry in which the messiness for which such models were later criticized was endorsed: it enabled modelers to be guided by the data while offering the flexibility needed to undertake policy analysis and to analyze the consequences of events. Practices that critics considered fatal weaknesses, such as intercept adjustments or fudging, were precisely what clients paid for as the macroeconometric modeling industry went private.

Antonella Rancan, "Empirical Macroeconomics in a Policy Context: The Fed-MIT-Penn Model versus the St. Louis Model, 1965–75"

The Keynesian-Monetarist debate of the 1960s and 1970s has mainly been reconstructed from the contributions of leading Keynesian economists on one side and Milton Friedman on the other. Even though the empirical character of the controversy has been recognized, the role played by macroeconometric models has been little investigated. This article enlarges the perspective by looking at the interactions between monetary economists outside and within the Federal Reserve System through the building of the Fed-MIT-Penn large-scale econometric model and the St. Louis reduced-form model. The models’ empirical results were instrumental to theoretical and policy discussions, while the use of different statistical approaches provoked methodological disputes and anticipated many of the issues that became central in the late 1970s.

Juan Acosta and Goulven Rubin, "Bank Behavior in Large-Scale Macroeconometric Models of the 1960s"

In this article we discuss the implementation of a portfolio choice framework and the inclusion of credit rationing by banks in several large-scale macroeconometric models built during the 1960s. We argue that the Fed-MIT-Penn model has a more transparent structure than the other models we examine: the structure of its money market is clearer, as is the relationship of its equations to the microeconomic choices of banks. Regarding credit rationing, we find that modelers made important efforts to include it, and to develop a measure of it, despite its nonobservable nature. A succession of proxy variables was used, and despite consistently negative results modelers kept trying to find a place for credit rationing in their models. These results invite a deeper reflection on the idea of microfoundations in large-scale macroeconometric models and on the role of beliefs in macroeconometric modeling.

Hsiang-Ke Chao, "Inference to the Best Model of the Consumption Function"

Empirical tests in economics may not decisively confirm or disconfirm a theory. Three case studies of the consumption function from the 1930s to the early 1970s discussed in this article show that economists’ practices can be interpreted as inferences to the best model, drawing both on the available evidence and on the competition among available models.
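To fix ideas (the specifications below are canonical examples from the period’s consumption literature, not formulas reproduced from Chao’s article), among the competing models of those decades were the Keynesian absolute-income function and Friedman’s permanent-income alternative:

\[
C_t = \alpha + \beta Y_t
\qquad\text{versus}\qquad
C_t = k\,Y_t^{P},
\]

where \(Y_t\) is current income and \(Y_t^{P}\) is permanent income; the article’s question is how economists inferred which such model the evidence best supported.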

Ariane Dupont-Kieffer, "The Vatican Conferences of October 7–13, 1963: Controversies over the Neutrality of Econometric Modeling"

The conference organized by the Pontifical Academy of Sciences (hereafter PAS) in 1963 on “the role of econometrics in formulating development plans” represents a milestone not only in the work of Ragnar Frisch but also in the history of econometrics, challenging the more or less normative status of econometric models. The PAS Study Week is interesting in two respects: while (and because) econometrics was acknowledged as a “scientific approach of economic phenomena,” the PAS invited the community of econometricians to revisit the role and contribution of economics to social justice and welfare issues. The article therefore investigates how econometrics was caught in a tension between being a tool of knowledge, defined with reference to positivism, on the one hand, and a means of changing society and creating a better world on the other. Three main issues of debate can be identified across the fifteen hundred pages of the proceedings: (1) the aim of econometric modeling (explaining and/or planning); (2) the scientific status of the model; and (3) the role of value judgments in the econometrician’s practice of econometric modeling. The debates reveal that the question of building a “science” of economic phenomena was still intense thirty-two years after the birth of the Econometric Society. They help us understand what grounded the practice and ambition underlying the work of these econometricians and how they defined and faced the challenge of the “neutrality” of both the model and their own practice.

Aurélien Goutsmedt, Erich Pinzón-Fuchs, Matthieu Renault, and Francesco Sergi, "Reacting to the Lucas Critique: The Keynesians' Replies"

In 1976, Robert Lucas explicitly criticized Keynesian macroeconometric models for their inability to correctly predict the effects of alternative economic policies. Today, most contemporary macroeconomists and some historians of economics consider that Lucas’s critique led forcefully to an immediate disqualification of the Keynesian macroeconometric approach. This narrative rests on interpreting the Lucas critique as a fundamental principle of economic reasoning that was (and still is) logically unquestionable. We consider this narrative problematic both in terms of historiography and in terms of the effects it can have in the field as a way of assigning importance and credit to particular macroeconomists. Indeed, the point of view of the Keynesian economists is missing, even though they were the target of Lucas’s paper and, throughout the 1970s and 1980s, reacted fiercely against it. In this article we analyze the reactions of a broad set of authors (whom we label “Keynesians”) who disputed the relevance of the critique. Despite their diversity on methodological, theoretical, and policy issues, these reactions were characterized by a common questioning of the empirical and practical relevance of the Lucas critique.
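In schematic terms (a standard reconstruction rather than notation from the article), the critique holds that a macroeconometric model’s estimated coefficients are functions of the prevailing policy rule and so cannot be used to evaluate alternative rules:

\[
y_t = \theta(\lambda)\,x_t + \varepsilon_t,
\]

where \(\lambda\) indexes the policy regime. A model estimated under regime \(\lambda_0\) recovers \(\theta(\lambda_0)\); simulating a new policy \(\lambda_1\) with the old coefficients misses the shift to \(\theta(\lambda_1)\).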

Boris Salazar and Daniel Otero, "A Tale of a Tool: The Impact of Sims's Vector Autoregressions on Macroeconometrics"

This article assesses the impact of Christopher Sims’s VARs on the evolution of contemporary macroeconometrics within the contentious context of the new classical revolution. We argue that the decision to use VARs was not an all-or-nothing affair but the outcome of the evolving interaction of tools, theories, and researchers within an overall process of learning by modifying. Using citation and cocitation networks, extraction algorithms, and semantic networks, we find evidence confirming an interdependent and collective evolution of the impact of Sims’s VARs and the emergence of new groupings of the most cocited articles at the interface of macroeconometrics and monetary policy analysis, revealing how the practice of macroeconometrics changed in the interim.
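As a point of reference, and not as notation drawn from the article, a vector autoregression in Sims’s sense stacks a handful of macroeconomic variables into a vector and regresses it on its own lags, dispensing with the a priori exclusion restrictions that structural Keynesian models imposed:

\[
y_t = c + A_1 y_{t-1} + \cdots + A_p y_{t-p} + \varepsilon_t,
\qquad
\mathrm{E}[\varepsilon_t \varepsilon_t'] = \Sigma,
\]

where \(y_t\) might collect, say, output, prices, money, and an interest rate, and every equation shares the same right-hand-side variables.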

Aurélien Saïdi, "How Saline Is the Solow Residual? Debating Real Business Cycles in the 1980s and 1990s"

In a 1957 paper, Robert Solow exploited the mathematical properties of the aggregate production function to isolate the role of disembodied “technical change” in economic growth. Solow’s method allowed economists to disentangle the contribution of technical change from that of the production factors, with the residual serving as a measure of total factor productivity growth. His method and results met with praise and criticism in equal measure. Questions about the residual gave rise to an abundant literature from the late 1950s onward that refined both the calculation technique and the results. The Solow residual inspired a surge of interest (and criticism) in the 1980s, when Finn Kydland and Edward Prescott used it to justify empirically the concept of technology shocks. In this paper, I argue that the resulting debates were not essentially different from those that had already taken place, within the National Bureau of Economic Research, in the 1950s and 1960s. They were, however, accompanied by a change in the “epistemic status of shocks” (Duarte and Hoover 2009, 228) in economics, which recast the Solow residual from a source of secular growth to be quantified into the initial impulse of short-term economic fluctuations. From then on, the debates entailed a choice among competing models of the business cycle. I contend that the Solow residual, highly malleable and easily decomposable from both a theoretical and an empirical point of view, turned out to be a clear-cut, although porous over time, demarcation line within the freshwater/saltwater spectrum (in Hall’s 1976 metaphor) between economists who believed the business cycle was driven mostly by supply-side factors and those who believed it was driven by demand-side factors, as well as a formidable weapon in the battle that divided them.
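For readers who want the construction itself, the growth-accounting decomposition behind the residual can be sketched as follows. This is the standard textbook reconstruction, assuming a constant-returns aggregate production function and competitive factor shares, not notation taken from Saïdi’s article:

\[
Y_t = A_t\,F(K_t, L_t)
\quad\Longrightarrow\quad
\frac{\dot{A}_t}{A_t} = \frac{\dot{Y}_t}{Y_t} - \alpha_t\,\frac{\dot{K}_t}{K_t} - (1-\alpha_t)\,\frac{\dot{L}_t}{L_t},
\]

where \(\alpha_t\) is capital’s share of income. Any output growth that growth in capital and labor cannot account for lands in \(\dot{A}_t/A_t\): the residual that Solow read as disembodied technical change and that Kydland and Prescott later reinterpreted as technology shocks.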