
Thornton Hall

The Revolution Will Be Kuhnian.

Several Points On Today's Krugman, including how the GOP and mainstream economics are doomed in much the same way.

A lot to say about Krugman's "Points of No Return."

First, more and more, Paul Krugman agrees with my point that Reaganism is a series of claims about how the world works that turned out to be false:

But hard as it is to admit one’s own errors, it’s much harder to admit that your entire political movement got it badly wrong. Inflation phobia has always been closely bound up with right-wing politics; to admit that this phobia was misguided would have meant conceding that one whole side of the political divide was fundamentally off base about how the economy works.

Second, he does a great job of not using the word "ideology" incorrectly, i.e., he doesn't use that word at all. Instead, notice "doctrine" and "dogma".

The answer is that like that ice sheet, his party’s intellectual evolution (or maybe more accurately, its devolution) has reached a point of no return, in which allegiance to false doctrines has become a crucial badge of identity. I’ve been thinking a lot lately about the power of doctrines — how support for a false dogma can become politically mandatory, and how overwhelming contrary evidence only makes such dogmas stronger and more extreme.

Right on! Others would incorrectly label the claim that QE by the Fed will cause inflation as "ideological" or "conservative". But there's nothing conservative about misunderstanding the economy. Being wrong is not part of a system of ideas about small government.

Third, is the GOP going the way of the Whigs? Here is Krugman's conclusion:

As I said, the process of intellectual devolution seems to have reached a point of no return. And that scares me more than the news about that ice sheet.

Finally, everything that Krugman says about the doctrine and dogma of the Republican Party applies with equal force to Mainstream Economics:

Nobody likes admitting to mistakes, and all of us — even those of us who try not to — sometimes engage in motivated reasoning, selectively citing facts to support our preconceptions.

I've just finished reading Forecast: What Extreme Weather Can Teach Us About Economics, by Mark Buchanan. It's a very clear and readable explanation of why the economics version of "mathematical models" is completely useless. Despite decades of criticism, the field has turned out to be quite slippery, able to point to models that don't rely on whatever assumption is being ridiculed at the moment. But Forecast goes to the heart of the matter: equilibrium analysis, which is the "E" in "New Keynesian" DSGE models (favored by central bankers at the Fed and in Europe), the foundation of IS/LM ("old faithful" as far as Krugman is concerned), and the key to the efficient market hypothesis and rational expectations theory (the obsessions of the Freshwater UofC Crowd).

Weather and markets are both incredibly complex systems that have long defied human attempts at understanding and prediction. In both cases, we experience effects that are caused by the interaction of a set of independent variables that, while finite, is so large that the human brain can barely conceptualize it. Economics is further complicated by the reality that the independent variables have free will and change their behavior based on their beliefs about the market system we are trying to understand in the first place.

Given the sheer scale involved, breakthroughs in weather forecasting, like the amazingly accurate predictions about the size, impact, and path of Superstorm Sandy, were made possible only recently, thanks to the exponential growth of computer processing power. It's not that the math behind the weather is particularly sophisticated; rather, it's the raw number of computations involved that made success impossible until we invented machines that could do lots and lots and lots of simple computations, very fast.
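To get a feel for the scale, here is a back-of-the-envelope sketch in Python. The numbers are my own illustrative assumptions (grid resolution, number of variables, and so on), not figures from Buchanan's book or from any actual forecasting model; the point is only that the arithmetic is simple and the count is astronomical.

```python
# Back-of-the-envelope: how many simple calculations a global weather
# forecast involves. All numbers below are illustrative assumptions,
# chosen only to show the scale, not taken from any particular model.

earth_surface_km2 = 510e6          # surface area of the Earth
cell_km2 = 25 * 25                 # one grid cell at ~25 km resolution
vertical_levels = 50               # layers of atmosphere per column
variables_per_cell = 6             # e.g., pressure, temperature, humidity, winds
timesteps = 10 * 24 * 60 // 5      # 10-day forecast, one step per 5 simulated minutes
ops_per_update = 100               # simple arithmetic per variable per step (assumption)

cells = (earth_surface_km2 / cell_km2) * vertical_levels
total_ops = cells * variables_per_cell * timesteps * ops_per_update

print(f"{cells:.2e} grid cells")        # ~4e7 cells
print(f"{total_ops:.2e} operations")    # ~7e13 simple operations per forecast
```

Tens of trillions of trivial arithmetic operations per forecast is nothing for a modern supercomputer and unthinkable for anyone working with pencil and paper.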

Unfortunately, the mathematics of econ didn't wait around for the invention of electric generators, let alone supercomputers, and so naturally relied on the resource available: the human brain, which can't match computers in terms of quantity but runs circles around them in terms of quality. In the 19th Century, John Stuart Mill looked at the problem of economic complexity and thought he could solve it with elegance. In so doing, Mill created the terrible methodology that, as modified by Milton Friedman, is still in use today. Mill, a mediocre intellectual if there ever was one, was doing his best, but the mistaken belief that his method produced valid conclusions has caused widespread suffering.

The Stanford Encyclopedia of Philosophy does a good job of telling the story in its entry on the philosophy of economics:

The first extended reflections on economic methodology appear in the work of Nassau Senior (1836) and John Stuart Mill (1836). Their essays must be understood against the background of the economic theory of their times. Like Smith's economics (to which it owed a great deal) and modern economics, the “classical” economics of the middle decades of the 19th century traced economic regularities to the choices of individuals facing social and natural constraints. But, as compared to Smith, more reliance was placed on severely simplified models. David Ricardo's Principles of Political Economy (1817), draws a portrait in which wages above the subsistence level lead to increases in the population, which in turn require more intensive agriculture or cultivation of inferior land. The extension of cultivation leads to lower profits and higher rents; and the whole tale of economic development leads to a gloomy stationary state in which profits are too low to command any net investment, wages return to subsistence levels, and only the landlords are affluent.

Fortunately for the world, but unfortunately for economic theorists at the time, the data consistently contradicted the trends the theory predicted (de Marchi 1970). Yet the theory continued to hold sway for more than half a century, and the consistently unfavorable data were explained away as due to various “disturbing causes.” It is consequently not surprising that Senior's and Mill's accounts of the method of economics emphasize the relative autonomy of theory.

Mill distinguishes between two main kinds of inductive methods. The method a posteriori is a method of direct experience. In his view, it is only suitable for phenomena in which few causal factors are operating or in which experimental controls are possible. Mill's famous methods of induction provide an articulation of the method a posteriori. In his method of difference, for example, one holds fixed every causal factor except one and checks to see whether the effect ceases to obtain when that one factor is removed.

Mill maintains that direct inductive methods cannot be used to study phenomena in which many causal factors are in play. If, for example, one attempts to investigate whether tariffs enhance or impede prosperity by comparing the prosperity of nations with high tariffs and nations without high tariffs, the results will be worthless because the prosperity of the countries studied depend on so many other causal factors. So, Mill argues, one needs instead to employ the method a priori. Despite its name, this too is an inductive method. The difference between the method a priori and the method a posteriori is that the method a priori is an indirect inductive method. Scientists first determine the laws governing individual causal factors in domains in which Mill's methods of induction are applicable. Having then determined the laws of the individual causes, they investigate their combined consequences deductively. Finally, there is a role for “verification” of the combined consequences, but owing to the causal complications, this testing has comparatively little weight. The testing of the conclusions serves only as a check on the scientist's deductions and as an indicator of whether there are significant disturbing causes that scientists have not yet accounted for.

Mill gives the example of the science of the tides. Physicists determine the law of gravitation by studying planetary motion, in which gravity is the only significant causal factor. Then physicists develop the theory of tides deductively from that law and information concerning the positions and motions of the moon and sun. The implications of the theory will be inexact and sometimes badly mistaken, because many subsidiary causal factors influence tides. Scientists can by testing the theory uncover mistakes in their deductions and evidence concerning the role of the subsidiary factors. But because of the causal complexity, such testing does little to confirm or disconfirm the law of gravitation, which has already been established. Although Mill does not often use the language of “ceteris paribus”, his view that the principles or “laws” of economics hold in the absence of “interferences” or “disturbing causes” provides an account of how the principles of economics can be true ceteris paribus (Hausman 1992, ch. 8, 12).

Because economic theory includes only the most important causes and necessarily ignores minor causes, its claims, like claims concerning tides, are inexact. Its predictions will be imprecise, and sometimes far off. Mill maintains that it is nevertheless possible to develop and confirm economic theory by studying in simpler domains the laws governing the major causal factors and then deducing their consequences in more complicated circumstances. For example, the statistical data are ambiguous concerning the relationship between minimum wages and unemployment of unskilled workers; and since the minimum wage has never been extremely high, there are no data about what unemployment would be in those circumstances. On the other hand, everyday experience teaches economists that firms can choose among more or less labor-intensive processes and that a high minimum wage will make more labor-intensive processes more expensive. On the assumption that firms try to keep their costs down, economists have good though not conclusive reason to believe that a high minimum wage will increase unemployment.

In defending a view of economics as in this way inexact and employing the method a priori, Mill was able to reconcile his empiricism and his commitment to Ricardo's economics. Although Mill's views on economic methodology were challenged later in the nineteenth century by economists who believed that the theory was too remote from the contingencies of policy and history (Roscher 1874, Schmoller 1888, 1898), Mill's methodological views dominated the mainstream of economic theory for well over a century (for example, Cairnes 1875). Mill's vision survived the so-called neoclassical revolution in economics beginning in the 1870s and is clearly discernable in the most important methodological treatises concerning neoclassical economics, such as John Neville Keynes' The Scope and Method of Political Economy (1891) or Lionel Robbins' An Essay on the Nature and Significance of Economic Science (1932). Hausman (1992) argues that current methodological practice closely resembles Mill's methodology, despite the fact that few economists explicitly defend it.
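Mill's "method of difference," described above, is just a controlled experiment, and his complaint about direct comparisons is the familiar problem of confounding. A toy simulation makes the contrast concrete; this is my own sketch in Python, not anything from the SEP entry, with a made-up formula in which a tariff lowers "prosperity" by exactly 2 units while geography, institutions, and luck matter far more.

```python
# Toy version of Mill's point (my own illustration). "Prosperity" is
# generated by a made-up formula in which a tariff lowers it a little
# (-2) and geography, institutions, and luck matter much more. Countries
# with weak institutions also tend to adopt tariffs, so a direct
# cross-country comparison (the method a posteriori) conflates the two,
# while a controlled comparison that changes only the tariff (the method
# of difference) recovers the -2 exactly.
import random

random.seed(0)

def prosperity(tariff, geography, institutions, luck):
    return -2.0 * tariff + 10.0 * geography + 15.0 * institutions + luck

# Thirty imaginary countries that differ in everything at once.
countries = []
for _ in range(30):
    institutions = random.random()
    countries.append(dict(tariff=institutions < 0.5,   # weak institutions -> tariffs
                          geography=random.random(),
                          institutions=institutions,
                          luck=random.gauss(0, 3)))

high = [prosperity(**c) for c in countries if c["tariff"]]
low = [prosperity(**c) for c in countries if not c["tariff"]]
print("naive cross-country estimate:",
      round(sum(high) / len(high) - sum(low) / len(low), 2))   # far from -2

# Method of difference: one country, everything held fixed except the tariff.
base = dict(geography=0.6, institutions=0.7, luck=0.0)
print("controlled estimate:",
      round(prosperity(tariff=True, **base) - prosperity(tariff=False, **base), 2))  # exactly -2.0
```

Mill's point was that economics is stuck with the first kind of comparison; that is why he reached for the indirect, deductive route instead.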

Have economists improved on Mill's methodology? Compare Noah Smith earlier this week in Quartz, describing the new and improved modern method:

Back in the age when economic data was very hard to gather, all you could really do was sit around and philosophize about how people might behave. A lot of useful stuff came out of that philosophizing, but a lot of non-useful stuff came out of it too. Now, thanks to the information age and the tidal wave of data, it’s becoming possible to see what works and what doesn’t in many arenas.

As opposed to the old Mill version described above:

 Scientists first determine the laws governing individual causal factors in domains in which Mill's methods of induction are applicable. Having then determined the laws of the individual causes, they investigate their combined consequences deductively. Finally, there is a role for “verification” of the combined consequences, but owing to the causal complications, this testing has comparatively little weight. The testing of the conclusions serves only as a check on the scientist's deductions and as an indicator of whether there are significant disturbing causes that scientists have not yet accounted for.

What Smith calls "seeing what works" and Mill calls "verification" are the exact same thing. If there's been an advance, it's been one of emphasis, at most.

But economists don't seem to believe that they are using a method developed in the 1800s. Why not?

Because they, almost universally, believe that Milton Friedman fixed everything in 1953. You know the UofC crowd loves Milton Friedman, but what do liberals like Paul Krugman think of him? Here's Krugman in the February 15, 2007, New York Review of Books piece, "Who Was Milton Friedman?":

And just to be clear: although this essay argues that Friedman was wrong on some issues, and sometimes seemed less than honest with his readers, I regard him as a great economist and a great man.

Friedman, in Krugman's view, was a great economist because of his defense and use of the concept of Homo economicus:

The hypothetical Economic Man knows what he wants; his preferences can be expressed mathematically in terms of a “utility function.” And his choices are driven by rational calculations about how to maximize that function: whether consumers are deciding between corn flakes or shredded wheat, or investors are deciding between stocks and bonds, those decisions are assumed to be based on comparisons of the “marginal utility,” or the added benefit the buyer would get from acquiring a small amount of the alternatives available.

When Krugman says "expressed mathematically" he is talking about equilibrium analysis: several equations with interlocking variables are "solved" for the set of values where everything (prices, utility, trade, unemployment, profits, GDP, and so on) is in perfect balance, with no need to change in any direction. Why does Krugman think we should imagine such a silly hypothetical actor and such an obviously unlikely balance?
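Before getting to his answer, here is what "solved" means in miniature. This is my own toy example in Python, nowhere near a real DSGE model: two goods, linear demand and supply curves, and we look for the pair of prices at which both markets clear at the same time.

```python
# Toy general-equilibrium calculation (illustrative only): two goods,
# linear demand and supply. "Equilibrium" is the pair of prices at which
# quantity demanded equals quantity supplied in both markets at once,
# i.e., the point of perfect balance with no pressure to change.
import numpy as np

# Demand for each good:  qd = a - B @ p
# Supply of each good:   qs = c + D @ p
a = np.array([100.0, 80.0])
B = np.array([[2.0, 0.5],
              [0.3, 1.5]])
c = np.array([10.0, 5.0])
D = np.array([[1.0, 0.0],
              [0.0, 1.2]])

# Market clearing: a - B @ p = c + D @ p, so (B + D) @ p = a - c.
p_star = np.linalg.solve(B + D, a - c)
q_star = a - B @ p_star

print("equilibrium prices:    ", p_star.round(2))
print("equilibrium quantities:", q_star.round(2))
```

Real models pile on many more equations and interlocking variables, but the move is the same: assume a balance point exists, then solve for it. Back to Krugman's answer: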

The answer is that abstraction, strategic simplification, is the only way we can impose some intellectual order on the complexity of economic life. And the assumption of rational behavior has been a particularly fruitful simplification.

And who gets the credit for coming up with that criterion of validity?

Friedman, who argued in his 1953 essay “The Methodology of Positive Economics” that economic theories should be judged not by their psychological realism but by their ability to predict behavior. And Friedman’s two greatest triumphs as an economic theorist came from applying the hypothesis of rational behavior to questions other economists had thought beyond its reach.

But Krugman's uncritical version of this story is incomplete. We return to the Stanford Encyclopedia of Philosophy (with my emphasis):

Friedman begins his essay by distinguishing in a conventional way between positive and normative economics and conjecturing that policy disputes are typically really disputes about the consequences of alternatives and can thus be resolved by progress in positive economics. Turning to positive economics, Friedman asserts (without argument) that correct prediction concerning phenomena not yet observed is the ultimate goal of all positive sciences. He holds a practical view of science and looks to science for predictions that will guide policy.

Since it is difficult and often impossible to carry out experiments and since the uncontrolled phenomena economists observe are difficult to interpret (owing to the same causal complexity that bothered Mill), it is hard to judge whether a particular theory is a good basis for predictions or not. Consequently, Friedman argues, economists have supposed that they could test theories by the realism of their “assumptions” rather than by the accuracy of their predictions. Friedman argues at length that this is a grave mistake. Theories may be of great predictive value even though their assumptions are extremely “unrealistic.” The realism of a theory's assumptions is, he maintains, irrelevant to its predictive value. It does not matter whether the assumption that firms maximize profits is realistic. Theories should be appraised exclusively in terms of the accuracy of their predictions. What matters is whether the theory of the firm makes correct and significant predictions.

As critics have pointed out (and almost all commentators have been critical), Friedman refers to several different things as “assumptions” of a theory and means several different things by speaking of assumptions as “unrealistic” (Brunner 1969). Since Friedman aims his criticism to those who investigate empirically whether firms in fact attempt to maximize profits, he must take “assumptions” to include central economic generalizations, such as “Firms attempt to maximize profits,” and by “unrealistic,” he must mean, among other things, “false.” In arguing that it is a mistake to appraise theories in terms of the realism of assumptions, Friedman is arguing at least that it is a mistake to appraise theories by investigating whether their central generalizations are true or false.

It would seem that this interpretation would render Friedman's views inconsistent, because in testing whether firms attempt to maximize profits, one is checking whether predictions of theory concerning the behavior of firms are true or false. An “assumption” such as “firms maximize profits” is itself a prediction. But there is a further wrinkle. Friedman is not concerned with every prediction of economic theories. In Friedman's view, “theory is to be judged by its predictive power for the class of phenomena which it is intended to explain” (1953, p. 8 [italics added]). Economists are interested in only some of the implications of economic theories. Other predictions, such as those concerning the results of surveys of managers, are irrelevant to policy. What matters is whether economic theories are successful at predicting the phenomena that economists are interested in. In other words, Friedman believes that economic theories should be appraised in terms of their predictions concerning prices and quantities exchanged on markets. In his view, what matters is “narrow predictive success” (Hausman 2008a), not overall predictive adequacy.

So economists can simply ignore the disquieting findings of surveys. They can ignore the fact that people do not always prefer larger bundles of commodities to smaller bundles of commodities. They need not be troubled that some of their models suppose that all agents know the prices of all present and future commodities in all markets. All that matters is whether the predictions concerning market phenomena turn out to be correct. And since anomalous market outcomes could be due to any number of uncontrolled causal factors, while experiments are difficult to carry out, it turns out that economists need not worry about ever encountering evidence that would disconfirm fundamental theory. Detailed models may be confirmed or disconfirmed, but fundamental theory is safe. In this way one can understand how Friedman's methodology, which appears to justify the eclectic and pragmatic view that economists should use any model that appears to “work” regardless of how absurd or unreasonable its assumptions might appear, has been deployed in service of a rigid theoretical orthodoxy. For other discussions of Friedman's essay, see Bear and Orr 1969, Boland 1979, Hammond 1992, Hirsch and de Marchi 1990, Mäki 1990a, Melitz 1963, Rotwein 1959, and Samuelson 1963.

Over the last two decades there has been a surge of experimentation in economics, and Friedman's methodological views probably do not command the same near unanimity that they used to. But they are still enormously influential, and they still serve as a way of avoiding awkward questions concerning simplifications, idealizations, and abstraction in economics rather than responding to them.

Finally, here's Buchanan in Forecast:

It is ironic that in the mid-fifties, just as meteorology was being set free by an appreciation of instability and its role in creating complexity, economics was busy binding itself into a rigid framework of equilibrium thinking. Arrow and Debreu’s 1954 proofs of the “theorems of welfare economics” led several generations of economists to interpret economic reality through equilibrium concepts, despite later studies—primarily the works of Sonnenschein, Debreu, and Mantel, but also of others—casting doubt on their relevance to any real economy. Since then many if not most economists have gone right on as if those negative results were never published. Milton Friedman once remarked that he didn’t worry about the results because “the study of the stability of general equilibrium is unimportant … because it is obvious that the economy is stable.” Because of this attitude, economics today stands where atmospheric science was in around 1920 or so—trying to force disequilibrium puzzles into the ill-shaped framework of equilibrium thinking.
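The stability question Buchanan raises is easy to illustrate. Here is a minimal "cobweb" market sketch in Python, my own toy example rather than anything from the book: suppliers set this period's quantity based on last period's price, and because supply reacts more steeply than demand, every swing overshoots, so the market spirals away from its equilibrium instead of settling into it.

```python
# Toy "cobweb" market: an equilibrium price exists, but the dynamics
# never settle into it. Suppliers choose quantity based on last period's
# price; since supply reacts more steeply than demand (d > b), each
# adjustment overshoots and deviations from equilibrium grow.
# Illustrative parameters only.

a, b = 100.0, 1.0    # demand:  qd = a - b * p
c, d = 10.0, 1.3     # supply:  qs = c + d * p_last

p_equilibrium = (a - c) / (b + d)   # price where demand equals supply
p = p_equilibrium + 1.0             # start one unit away from balance

for t in range(8):
    q = c + d * p                   # suppliers respond to the last price
    p = (a - q) / b                 # price that sells exactly that quantity
    print(f"period {t}: deviation from equilibrium = {p - p_equilibrium:+.2f}")
```

Existence of a balance point and a tendency to actually reach it are different claims, and the equilibrium economics Buchanan criticizes routinely assumes the second while establishing, at best, the first.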
