
Dismiss Macroeconomic Myths and Restore Accountability

At this juncture … the impact on the broader economy and financial markets of the problems in the subprime market seems likely to be contained. In particular, mortgages to prime borrowers and fixed-rate mortgages to all classes of borrowers continue to perform well, with low rates of delinquency…. The incoming data have supported the view that the current … policy is likely to foster sustainable economic growth.

—Ben Bernanke, March 28, 2007

The Federal Reserve is not currently forecasting a recession.

—Ben Bernanke, January 10, 2008

To sum up the story on the outlook for real GDP growth, my own view is that, under appropriate monetary policy, the economy is still likely to achieve a relatively smooth adjustment path, with real GDP growth gradually returning to its roughly 2½ percent trend over the next year or so, and the unemployment rate rising only very gradually to just above its 4¾ percent sustainable level.

—Janet Yellen, December 3, 2007

Statements from past and present Federal Reserve chairs inspire little confidence that their models provide any understanding of what is going on within the United States, never mind the rest of the world. Yet the Fed continues to rely on the same analyses as it did before the financial crisis of 2008, and it is still led by some of the same people who were never held accountable for their miscalculations.

With the failure of central banks’ policies before 2008, followed by improvisation ever since, more introspection is needed. Perhaps the Fed, economists, and politicians should examine what led them so far astray. How could they not see the simplest facts before their eyes?

The bubbles and blunders of the last few decades are not attributable merely to modeling mistakes or human error. They suggest a deeper problem with contemporary economic theory, which has relied on aggregate data and academic models while ignoring the central importance of institutions that ensure capital is created and allocated accountably. In the developed world, accountability has been significantly weakened over the last several decades by policies drawing on Keynesian and other “macroeconomic” theories, which wear the mask of science but on closer inspection turn out to be more akin to astrology. As for the rest of the world, institutions that hold people accountable are mostly in their infancy.

The failures of present-day macroeconomic thinking are not limited to the Left or the Right. Most leading economists have failed to foresee or understand the most important economic phenomena of our times. Even critiques of macroeconomic orthodoxy have tended to rely on its jargon, models, and underlying premises. Today, right- and left-wing economists battle over their respective charts and models, without realizing that their theories increasingly fail to take into account the realities of existing institutions. The effect has been spurious justifications for increased social and economic engineering, without regard for economic or political accountability. Voting has come to mean less and less as the flow of capital has become more and more centralized. Restoring accountability to its central place in sound economic policy and thinking should be the top priority. This requires revisiting the foundations of economic theory and the basis of capitalism.

Capital and Accountability

To prosper, talents must be matched with capital, holding all parties accountable: the diverse talents, the capital, and the “matchmakers.” This is far easier said than done. The difficulty—and the link to government spending—becomes immediately apparent once the meaning of the words in this misleadingly simple observation becomes clear.

Money becomes “capital” only when the “matchmakers”—whether banks, governments, international institutions, venture and other funds, angel investors, or family and friends—enter into contractual agreements that transfer the rights to spend the invested money to another entity in return for future incomes. That is what capital is: without a transfer of rights, no capital is created. Money that stays in a mattress is not “capital.”

The parties to the transfer must expect that the contracts will be enforced, whether by courts and police or, as in our past and in traditional societies, by religious institutions and customs. This is a description of what “capital” is and has nothing to do with ideology. Nobody knows whether such transfers will create “future incomes,” even if all parties are held accountable. These are bets on entrepreneurs, managers, and their teams.

In any society, there are only three ways of creating capital: (1) transferring families’ and friends’ savings among themselves (parents are the potential bankers furnished by nature), with a variety of arrangements to enforce accountability; (2) transferring savings and leveraging them through financial institutions, which have their own maze of arrangements to enforce accountability; and (3) transferring savings through governments (or NGOs).

The transfer of rights through financial intermediaries is always voluntary. Therefore, at some level, there is accountability. Transfers among family members and through governments may or may not be voluntary. In France, to this day, parents cannot completely disinherit their children. By contrast, in the United States, the creation of capital through family transfers is a voluntary matter.

The transfer of money through taxes, laws, and regulation varies widely by country. Perhaps only in the unique Swiss direct democracy is the creation of capital, when cash flows through the government, essentially “voluntary.” The Swiss have the right to vote in referenda on each major item of spending, regulation, and taxes, at the municipal, canton, and federal levels. In 2014, they also rejected an initiative that would have forced the country’s central bank to hold one-fifth of its assets in gold. When the Swiss recognize having made a mistake in approving a spending measure, a tax, or a regulation, they have the right to reverse the decision via initiatives. These unique features of Swiss democracy have not only dispersed power but also kept politicians at all levels accountable and their wings clipped.

At one extreme, there is this unique Swiss democracy, where all government actions are subject to referenda and the differences between private and public financing are minimized. At the other, consider Russia, China, and some Eastern European, Latin American, or African countries, in which politicians almost totally determine the flow of cash, and the people have no power. Never mind if people gain the right to vote: with capital markets closed and money only flowing through or directed by governments, constitutions become just pieces of paper—they do not mean a thing in terms of dispersing power.

Countries lacking institutions to create “capital” and collateral, without decentralized financial institutions and without independent courts, have little choice but to finance infrastructure and other spending by taxes and government borrowing. Whether such spending ends up building assets and future incomes or is spent “foolishly” is impossible to infer from aggregate numbers. Things that are measured—housing, bridges, roads and monuments, grandiose steel and cement factories—can all easily melt into the thin air of misuse and corruption.

When communism, confiscation, or inflation wipes out private savings, and a country has neither private financial institutions nor legal institutions expected to enforce transfers of rights, “giving power to the people” is a slogan without meaning. Venezuela is a prime example. Such societies may allow occasional general elections to preserve the veneer of democracy, but that veneer becomes increasingly farcical as the flow of credit becomes unaccountably centralized.

It is not the purpose of this essay to suggest solutions for “emerging” countries, or to estimate how long such transitions could take. I mention the above examples only to show why there cannot be model- and data-reliant general theories about “macroeconomic” policies that do not consider the state of accountability of political and other institutions.

Let us rather look not at extremes, but at the United States and Western European countries. Here, policymakers’ accountability to their constituents is increasingly ignored in deference to the muddled pronouncements of a priesthood of tenured economists.

The New Theories of the 1930s and Their Long Shadows

During the 1930s, “general theories” concerning government spending and financing came into being. These have legitimized centralized spending ever since, rationalized by “macroeconomic” models, jargon, and data. That these theories gained a widespread following at that time is not accidental. During these years, governments and central banks inadvertently destroyed the process of transferring rights, and the ability to create collateral and capital was drastically weakened. Unfortunately, academics and politicians in the United States and Western Europe still develop policies based on these only half-correct ideas from the 1930s, now masquerading as “science.” It is high time we discard the ideas and the jargon.

Economists and politicians still invoke John Maynard Keynes’s 1936 General Theory to justify government spending in general and incurring deficits and debts in particular. Never mind that Keynes retracted his views and jargon, admitting that he was mistaken and even calling his followers “fools.”

Keynes’s General Theory is neither “general” nor “theory,” but something of a mishmash, which stands in sharp contrast to his clearer writings before and after. Indeed, Keynes promised to rewrite it, but World War II and his sudden death soon after intervened. Economists later translated Keynes’s obscure prose into trivial mathematics, claiming that the solution of a few equations with a few unknowns offers guidance for all societies everywhere. Later the math became more complex but still was empty of content—“accountability” had disappeared from economists’ vocabulary. There are no political or legal institutions in these trivial exercises (for that matter, the General Theory does not have them either), which, in fact, sheds light on their success in academia, political circles, and the media.

After all, governments could legitimize policies centralizing powers by relying on such theories and on the armies of superficial Ph.D.s in economics spouting them. Meanwhile, commentators did not have to know anything except trivial math and the new jargon to pass it on. Niels Bohr, who distrusted mathematical models even in his own field, made an accurate observation about academics who come up with arguments drawing solely on mathematical models: “Oh, you are just being logical; you are not thinking.”

However, bad theorizing does not imply that public spending was not the right policy during the 1930s. Relying on gravity is a good idea—even if one has never heard of Newton. It turns out that a sequence of events during the decades preceding Keynes’s jargon and analyses brought about the destruction of financial institutions and their ability to recreate capital and collateral. Keynes was right in suggesting that governments had to fill the void.

To offer one illustrative example, the United Kingdom experienced both deflation and high unemployment after Winston Churchill, chancellor of the exchequer, decided in 1925 to relink the pound sterling to gold at its pre–World War I parity, in spite of the fact that the price level had doubled during the war and the pound had fallen by sixty percent. One does not have to know any economics to realize that this abrupt political repricing would bring deflation, reduce exports, and increase unemployment. With long-term financial and wage contracts signed in nominal terms, their real burden jumped in a flash, compounding the severe impact of the war.
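The arithmetic can be made concrete with a back-of-envelope sketch. The figures below are the stylized numbers from the paragraph above, not historical series, and the calculation ignores foreign price movements:

```python
# Back-of-envelope sketch of the 1925 repricing, using the stylized
# numbers quoted above (illustrative only, not historical data).

prewar_parity = 1.0   # external value of the pound at pre-WWI gold parity (normalized)
market_value = 0.4    # pound trading ~60% below parity, as stated above
price_level = 2.0     # domestic price level roughly doubled during the war

# Forcing the pound back to prewar parity multiplies its external value overnight:
revaluation = prewar_parity / market_value   # a 2.5x jump
# With domestic prices doubled, restoring the prewar real parity would
# require roughly halving domestic prices and wages:
required_deflation = 1 - 1 / price_level     # ~50% deflation

print(f"External value of the pound multiplied by {revaluation:.1f}")
print(f"Implied deflation to restore competitiveness: about {required_deflation:.0%}")
```

On these stylized numbers, the repricing demanded a deflation of roughly half, which is the mechanism by which nominal contracts written before 1925 suddenly became far heavier in real terms.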

Churchill later admitted that this decision was his biggest blunder ever. But exchange rates make no appearance in Keynes’s General Theory, though Keynes discusses them elsewhere. Economists later attributed the U.K.’s problem to the gold standard, though it was Churchill’s political decision to reprice the pound at a mistaken level that brought on the disastrous outcomes. The usual fiscal tools of taxes or balancing budgets cannot compensate for such grave mistakes: on this point Keynes was right that traditional policies would not work at that time.

Correcting the mistake by rapidly devaluing the pound back to its pre-1925 level might have been a better remedy than having the government become the main source of capital. The government would instead go on to make a maze of mistaken spending and borrowing decisions that were eventually rationalized by Keynes’s musings and new jargon about a “closed economy.” The U.K. reversed the 1925 mistake only in 1931, when it abandoned the gold standard and devalued the pound. By then, after much upheaval and ideological confusion, many observers did not even perceive Churchill’s 1925 decision as a “mistake,” believing instead that new theories and government programs were needed to stabilize societies.

Keynes attributed the “slump” to arbitrary fluctuations in people’s “animal spirits”—never contemplating the possibility that governments’ bad policies had depressed such spirits in the first place. An inveterate elitist, Keynes assumed that only hoi polloi were subject to “animal” instincts. The General Theory assumed that politicians and bureaucrats would make prudent and timely decisions—if only advised by wise economists such as himself. As noted, Keynes’s General Theory, dealing with an isolated, closed economy, said nothing about Churchill’s decision on exchange rates, the political climate, the impact of the wildly increased tariffs of many countries during the 1930s, or the destruction of other countries’ financial institutions and private savings.

Instead, Keynes sought a rationale for government spending independent of the circumstances in which it occurred or the institutions that might exist to ensure accountability. Keynes noted that the success of government spending has a long history:

Pyramid-building, … even wars may serve to increase wealth, if the education of our statesmen on the principles of the classical economics stands in the way of anything better….

… Just as wars have been the only form of large-scale loan expenditure which statesmen have thought justifiable, so gold-mining is the only pretext for digging holes in the ground which has recommended itself to bankers as sound finance….

Ancient Egypt was doubly fortunate, and doubtless owed to this its fabled wealth, in that it possessed two activities, namely, pyramid-building as well as search for the precious metals…. The Middle Ages built cathedrals and sang dirges.

One implication of this strange view is that rulers or government officials can even spend money on themselves and their families, whether in this life or the afterlife. Corruption is better than doing nothing. No wonder dictators and centralized governments just love Keynesians.

Except Jargon, Nothing New Under the Sun

The gravely mistaken policies of the 1930s, as noted above, and others emanating from the badly structured Treaty of Versailles (1919), weakened Western capital markets and international trade. With savings, collateral, and private financial “matchmakers” destroyed, people turned to governments for rescue. But it would be misleading to conclude that the appeal to government spending as a solution had anything to do with what we now call Keynesian policies.

The historical rationale Keynes invokes might have achieved governments’ intended impact at the time—but not because such “infrastructure” spending had anything to do with unemployment rates. The impact related to poverty and inequality, terms that do not appear in subsequent Keynesian models and jargon. These were the relevant issues during the 1920s and 1930s, and they are relevant now.

For ages, the rationale for public spending on infrastructure was to maintain political stability, mitigating the incentives of the impoverished to riot, rebel, or commit crimes against property and the “establishment.” Such rationalization for government spending made sense in earlier times. Until well into the nineteenth century, there were few societies with sufficient private financial intermediaries, legal institutions, and other arrangements necessary to hold parties accountable and create “capital”—or even to justify the invention of the word. Governments had to be in the business of transferring rights and creating capital, while also sustaining the poor and those falling behind, in order to preserve the peace. Bismarck stated explicitly that he decided to offer workers various benefits, social security among them—for the first time in Western Europe—so as to diminish the attraction of the “revolutionary ardor” that was sweeping through neighboring European countries at the time.

Keynes’s statements on the pyramids or on digging holes in the ground actually echoed Sir William Petty’s recommendation to use tax money for the construction of pyramids on the Salisbury Plain for “entertainments, magnificent shows, triumphal marches”—predecessors of today’s building of sports arenas, or the work programs for young males in Canada and the United States during the Great Depression.

Sir William thought that it would be better if society employed the poor and paid them rather than merely offering charity. Even if the work consisted of bringing the “Stones of Stonehenge to Tower Hill,” he noted, it “could keep their minds to discipline and obedience, and their body to a patience of more profitable labours when need shall require them,” lest they lose the faculty of laboring. Today we would say that Sir William expected the government’s transfer payments to create “human capital” in the shape of “discipline” and “obedience.” His arguments are not different from debates about the long-term impacts of “welfare” vs. “workfare” in the 1930s, or the ones that led the Clinton administration to drastically change welfare laws in the late 1990s.

There was nothing novel even in Sir William’s observations. In his 1589 classic, The Reason of State, Giovanni Botero documented the existence of this view since at least ancient Greece: “Caesar, aspiring to the rule of his country, lent a hand to all who had fallen into dire need, either through debts, bad management, or other accidents. Since they had no reason to be happy, he thought them ripe for use in his project of overturning the republic…. To this end Augustus Caesar built extensively and exhorted the principal citizens to do the same, and in this way he kept the plebeian poor quiet.”

The resonance of this line of thinking today should be surprising. How is it that the United States, which has the deepest and most democratized financial markets in the world, should find itself in a position where government, through infrastructure spending and so forth, is perceived as the only effective source of capital in so many depressed areas?

To answer this question requires a clearer understanding of the failed policies of the past and the theories that legitimized them. As taxes and regulations compound to pay for well-intentioned if rarely thought-through programs, the weakened matchmaking, capital-creating process prevents the creation and effective use of private-sector capital and jobs. Governments then come under the electorate’s pressure to increase spending and subsidies (requiring, in turn, additional tax revenues and more borrowing). And it is precisely our intellectual subservience to academic economists’ “macrostrology” that perpetuates this vicious cycle of failed policy. Instead of correcting policy mistakes by holding institutions more accountable, our economists, lost amid meaningless aggregate statistics, abstract models, and half-correct theory, conceive only of more aggressive social engineering.

Events Leading to the 2008 Crisis

Jonathan Rauch, among others, has argued that the United States has been in a state of “demosclerosis” since the 2000s. According to him, “the conventional wisdom is backward: The worrisome thing is not so much that American society is in the grip of its gridlocked government, but that American government is in the grip of powerful and broad changes,” chief among them the idea that governments are successful in solving problems through a superior ability to “reassign resources” under the direction of economists.

As discussed above, governmental involvement in matching people and capital may be necessary at times, particularly when providing “digging-holes jobs” in certain locations might prevent the dissolution of communities. This is the kind of benefit that Sir William would approve of, and which Keynes may have been referring to when claiming that it would be better to pay people to dig holes and fill them up than to have breadwinners sitting at home or mounting barricades. But these arguments are entirely absent in today’s macroeconomic, Keynesian framework. Rather, the historical idea that digging holes (for homes, in particular) induces stability, combined with abstract Keynesian theoretical prisms, proved to be a lethal combination. This mode of analysis was one of the main factors that blinded Washington and the Fed to what was going on before the 2008 crisis and in its aftermath.

The road to the crash started with both the Clinton administration’s drastic reduction of capital gains tax rates on housing in 1997 (but not on stocks or bonds) and legislation that forced banks to make mortgage loans to “subprime” borrowers. This combination made housing a more “liquid” asset, less “consumption” and more “investment,” with incentives to flip homes more quickly. Rather than stabilizing neighborhoods and helping poorer people get a “stake in the system,” the changes had a destabilizing effect, with the subprime borrowers ending up with titles, but no “stakes.”

An unintended consequence was the distortion of traditional price indexes under Fed chairman Alan Greenspan, leading to his “conundrum”: home prices jumping and credit and capital expanding vastly, yet the measured inflation rate staying low. He mistook the expanding “capital accumulation” of the financial sector (due to the leveraged real estate expansion) and high home prices as signals of higher future incomes.

The Fed, statistics bureaus, and rating agencies also failed to notice that the failure of large increases in home prices to affect the customary calculation of inflation was a consequence of the new capital gains tax laws and bank regulations, since the weight given to “consumption” of homes (measured by “rents and rental equivalents”) in the price index had become too high. People were purchasing homes expecting tax-free capital gains, renting them out at reduced prices (or not renting them at all), and flipping them quickly, leading to significant underestimation of the CPI (in which the rent component stayed at 30%). The Clinton administration had turned homes into “assets” rather than “consumption.” The inability of analysts to understand or adjust for the CPI effects of this change contributed to Greenspan’s complacency about the expansion of credit.
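A stylized calculation illustrates the bias. The weights and price changes below are illustrative assumptions rather than BLS data, and the two-component index is a deliberate simplification of actual CPI methodology:

```python
# Stylized illustration (not BLS methodology) of how measuring housing
# by rents can understate inflation when home prices surge but rents stall.

rent_weight = 0.30         # share of "rents and rental equivalents" in the index (as above)
other_weight = 1 - rent_weight

rent_change = 0.00         # assumed: flippers rent at reduced prices, so rents stay flat
other_change = 0.02        # assumed inflation in the rest of the basket
home_price_change = 0.12   # assumed annual jump in home prices during the boom

measured_cpi = rent_weight * rent_change + other_weight * other_change
asset_inclusive = rent_weight * home_price_change + other_weight * other_change

print(f"Measured CPI: {measured_cpi:.1%}")                            # 1.4% -- looks stable
print(f"Index with home prices in the basket: {asset_inclusive:.1%}")  # 5.0%
```

On these illustrative numbers, a rent-based index signals price stability even while a housing boom is underway, which is consistent with the complacency described above.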

Subprime loans went from 2% of total loans in 2002 to 30% in 2006. These loans were packaged into collateralized debt obligations (CDOs), with S&P rating 50% of the new securities AAA in 2007 (by 2008, 55% of the mortgage-backed securities issued from 2005 to 2007 had been downgraded), and a bloated financial sector was assisted by the ill-conceived roles of Fannie Mae and Freddie Mac. To give the benefit of the doubt, innocent ignorance may have been part of the rating agencies’ mistaken calculations. The agencies assumed that default rates on mortgages would stay at their historical levels, in spite of the fact that banks no longer performed due diligence but had become marketers of the new financial instruments, backed by increasingly “subprime” borrowers.

Briefly: the combination of bad laws, bad inferences, Freddie and Fannie’s lack of accountability to anyone, badly designed incentives in the financial sector and rating agencies, the perception of events through Keynesian prisms of aggregates, and, last but not least, most financial intermediaries’ failure to do due diligence turned into a perfect storm. The result was mismatched credit and aggregates that rapidly lost their meaning, leading to mistaken conclusions. The mismeasured price indices, the “capital” accumulation (much of it real estate related, or leveraged by it), the vast expansion of both credit and the financial sector, and the mismeasured “high productivity” growth all led to the inference that future real incomes were growing. The mistakes just kept compounding, though it all started with good intentions and a reasonable view that having a stake in real estate stabilizes communities. The problem was that “a stake” should mean people putting up some of their own hard-earned money, rather than getting subsidized blank checks.

Avoiding the Same Mistakes in Publicly Financed Infrastructure

Increasing infrastructure spending now seems to be a bipartisan goal. President Trump campaigned on it, while Lawrence Summers, treasury secretary under President Clinton, has been among the most enthusiastic proponents of “deficit-financed infrastructure” to boost the speed of U.S. growth. According to Summers, this would ease bottlenecks and also give instant incentives for the private sector to expand. After all, Western governments can now borrow at unprecedentedly low rates.

Indeed, more government spending on infrastructure may well be a necessary and beneficial policy. A recent report by the Business Roundtable listed some of the bottleneck problems Summers alludes to: by some measures, America’s infrastructure quality ranks sixteenth in the world, lagging behind Germany, France, and Japan; nearly one in four U.S. bridges is structurally deficient or functionally obsolete; urban highway congestion costs more than $100 billion annually; and, on the waterways, port congestion, lock delays, and the lack of facilities for larger ships add billions to the cost of products annually.

However, in order to avoid the unintended inflation of another bubble—the typical outcome in recent times of such large-scale government interventions—debates over infrastructure spending cannot be left to the “macrostrologists.” Instead of debating infrastructure spending solely in terms of superficial math and aggregate statistics, the merits and impacts of specific projects must be considered, along with the question of how to ensure accountability.

To begin with, it is helpful to answer the question of why government involvement is needed in the first place. If it is so easy to spot these problems, why doesn’t the private sector deploy the capital to tackle them? In important respects, government involvement seems necessary mainly to correct past policy mistakes. The World Bank’s 2016 Doing Business report ranks 189 countries in terms of the ease of operating businesses in various sectors: in getting construction permits, the U.S. ranks 33rd; in getting electricity, 44th; in getting trade moving across borders, 21st. Even the New York Times called Mr. Obama the “Regulator-in-Chief.”

However, it is foolish to blame the Obama administration alone. In addition to the uncertainty such regulatory obstacles bring about, there has also been increased uncertainty in the way spending is approved, preventing the creation of collateral against which businesses borrowed in the past. Here is one example: in July 2015, Congress failed to pass a long-term highway bill to make improvements in the nation’s transportation system. Instead of funding big infrastructure projects with a multiyear plan, as once was the norm in Washington, lawmakers passed a bill covering only three months—for the thirty-fourth time since 2009. The standard practice before had been to pass highway bills of six years’ duration.

Economists usually prefer not to be bothered with the political process at all, believing that their models and data are an infallible source of wisdom. However, involving voters ensures at least some political accountability for government projects, and the crucial—if often ignored—questions concerning capital allocation on infrastructure are as much political as they are economic.

I do not know, for example, whether American voters want highway spending to have priority over other projects, such as deeper ports to accommodate the new deep-water ships that can now transit the Panama Canal. At present only two of the country’s fourteen major East Coast ports, Baltimore and Norfolk, are ready for such ships. Or do voters want the federal government to get out of these two businesses altogether?

I do not know either whether approving a long-term highway spending bill would induce more private creation of capital and more employment than, say, eliminating obstacles to financing deeper ports or health care. Highways would induce more production of trucks, whereas deeper ports would lead to building bigger ships, which, in turn, might create more investment in rail and in big boxes to load and offload goods more efficiently.

Since there is now unused capacity in ports and in the truck and train fleets, what type of infrastructure should be financed initially is far from clear. With more car sharing (transporting cars in ships is a major business), fewer books and CDs, and the coming of 3-D printing, perhaps manufacturing will become more local and less shipping capacity will be needed. With an aging population requiring more, and more expensive, health care, people might spend less on goods, bringing about less transportation. How do politicians and bureaucrats know where, when, and how much to spend, or how to approve budgets within short periods of time? Why not have referenda on it?

The facts summarized above show that solutions to today’s problems are not simply matters of “more spending on infrastructure.” Questions such as the following must be answered: For how long are spending commitments made? Do they set up the right institutions and arrangements to hold all parties to this particular spending accountable? Is the proposed time horizon compatible with the time horizon private businesses need to finance their expansion? How secure are such political commitments? Is this a priority? Present political haggling and weakened accountability prevent answering these questions and, indirectly, prevent the creation of the collateral needed for private businesses to expand.

Accountability and Monetary Policy

At one point, Lawrence Summers heavily criticized investment in housing due to the involvement of Fannie and Freddie. His point was simple: the government had declared Fannie and Freddie to be “private companies” but remained on the hook for their debt. They therefore had two masters, a situation that prevents accountability. I do not know why Summers does not raise the same objections to any other politically determined projects, or, for that matter, to the Fed’s equally dubious dual mandate. The Fed’s mandate has shifted from the technical one of maintaining price stability (for which it could be held accountable) to “unemployment,” which is heavily affected by political and fiscal decisions and which instantly makes the Fed’s decisions political, and now also to the undefined goal of “financial stability.”

The definition of “financial stability” has not been thought through: central bankers now play with the idea of recommending 4% annual inflation in the name of “stability” (while considering 0.5% deflation a disaster), yet never consider how such stability is compatible with exchange rate fluctuations of 30–50% over short periods of time—even though contracts denominated in those currencies are the basis of international exchanges and commercial societies.

While it is true that hedging can reduce companies’ exposure to exchange rates, hedging is expensive and needs deep capital markets that most countries in the world do not have. As a result, smaller firms in most of the world cannot grow. How does that promote financial—or any—“stability”? Recall that governments invoked “stability,” too, when passing the ill-conceived reduction in capital gains taxes on real estate, when forcing banks to give loans to subprime borrowers, and when sustaining Fannie and Freddie—all in the name of the academic idea that home ownership “stabilizes” neighborhoods. Most forgot that “ownership” means putting up hard-earned money to get a stake in a home, not getting a piece of paper without putting down even a penny.

The weakened accountability across the economy has also brought about a diminished, and misleadingly named, “natural rate of interest.” Upon closer examination, there is nothing “natural” about it, though central bankers invoke the term frequently in discussing future policy.

In an August 2016 paper, John Williams, president and CEO of the San Francisco Fed, wrote that “the underlying determinants for these declines are related to the global supply and demand for funds, including shifting demographics, slower trend productivity and economic growth, emerging markets seeking large reserves of safe assets, and a more general global savings glut.”

The other side of a “global savings glut” is a “global decline in demand for investments.” But to say that somehow the first is a “determinant” of the decline is a big mistake—even by the low standards of today’s minimal economic knowledge. It is impossible to talk about the price of anything by just considering the supply, is it not? The demand should be looked at too.

“Investment” and the creation of capital are not determined in any predictable fashion by domestic or global “savings.” When governments strengthen accountability and give incentives to transfer rights from savers to pools of talent, the same amount of savings can be leveraged into creating more collateral and capital. The statement that central banks know what rate balances investment and savings (calling it “natural”) and can thus keep economies fully employed and inflation stable by looking at domestic aggregates does not hold up to scrutiny.

If economists want to examine this rate, they should refer to the many obstacles that prevent creating capital at this time. But this would put central banks smack in the middle of politics—which is not their mandate. For central banks to keep their distance from politics and fiscal policies, their mandate should be restricted to the one thing that at least in principle they could do well (even if historically they have not): namely, maintaining price and exchange rate stability. Sticking to such a mandate would diminish the monetary obstacles to setting up contracts (transfers of rights, that is), whether domestic or international, and speed up the creation of collateral and capital.

The perception of a lower “natural rate of interest” is a direct result of the weakening of accountability in economic and political institutions. When accountability is weakened, less capital can be created, whatever the savings. Moreover, the weakened institutions diminish the value of collateral, which diminishes confidence and investment. The consequence is not a “paradox of savings” (more Keynesian nonsense) or a “savings glut.” Savings increase to insure against rainier days, even though the returns on them are diminished at the worst time. When the loss of confidence coincides with an aging population, the impact is even stronger.

Between age fifty-five and death, lower interest rates do not induce more risk-taking, since a loss is less likely to be recouped: they bring greater incentives for capital preservation. And businesses expecting such behavior from increasing segments of their older population—the savers—have less incentive to invest. There are ways to overcome the impact of such demographic changes and increase the “natural rate,” but changes in government policies are needed for that, not changes in central banks’ mandates. And though emerging countries have young and growing populations, the remedy there too would be to make their governments more accountable and force central banks to stick to a mandate of sustaining the value of contracts.

To summarize: the slower rate of growth, innovation, and productivity—the outcome of the diminished bets resulting from matchmaking between talent and capital—reflects the weakened accountability in countries around the world. Whereas Western governments have put more obstacles in the way of making such bets, and failed to adjust to their aging populations, the rest of the world has kept atavistic institutions fitting far smaller, tribal societies. The best central banks could do is to diminish monetary obstacles to the contractual agreements needed to transfer rights that create capital domestically and internationally. This means both sustaining stable price levels and negotiating international agreements to maintain stable exchange rates, unless, as noted, extreme circumstances or the need to correct grave policy blunders forces central banks to play fiscal roles and become agencies of their countries’ treasuries, as the Fed was between 1940 and 1951.

How Do You Legitimize Decisions?

When people have to deal with situations for which there are no precedents (for instance, a world population that grew from one to seven billion in a century), how do governments claim legitimacy for their decisions and authority to act? There cannot be any “science” for answering this question. Uniqueness defies “falsification”—which is what science is about. But societies must manage leaps into uncertainty.

Throughout history, people invented mazes of institutions to solve the issue of “legitimacy.” These days we may consider the outcome of casting lots, throwing dice, or the Biblical “Urim and Thummim” a matter of chance. But many societies believed that a spiritual power controlled the outcome, and the priesthood, held in esteem, had the exclusive right to throw these devices. Thus were decisions legitimized.

If this sounds “primitive,” consider examples of institutions that later generations have relied on to legitimize decisions. In ancient Greece, people flocked to oracles to resolve doubts and seek guidance in private and public affairs. No decision on engagement in war, on signing a treaty, or on enactment of law was made without oracular approval. Were decisions made based on oracles’ forecasting any better than those based on the throwing of dice or casting of lots?

Later, monarchs and governments relied on astrology for making decisions. For centuries, rulers perceived astrology as an exact science, and books presenting complex geometrical calculations about the positions of stars claimed legitimacy for decisions based on them. The mathematical complexity, like the “sacred” languages of religions in earlier times, and like the trivial algebra of today’s macrostrology (macroeconomics, I mean), helped sustain auras of credibility.

In England, from the time of Elizabeth I to that of William and Mary, the status of judicial astrology was well established. The most learned and the most noble did not hesitate to consult astrologers openly. In every town and village, astrologers were busy fixing dates for prosperous journeys and for setting up enterprises (whether shops or the marching of armies), and rulers had their “councils of astrological advisers.”

History rhymes. There is nothing “scientific” about the macro models guiding policymakers these days. They have all been repeatedly refuted, and even members of the high priesthood of this discipline, Larry Summers and Paul Krugman among them, are highly critical of FRB/US (the macroeconomic model the Fed claims to have used since 1996, though Greenspan appeared to disregard it). But governments created and still rely on institutions that subsidize academics to shape the perception that the decision-makers know what they are doing.

Governments consult economists in every municipality, county, state, and country and, on monetary and global matters, central bankers and the IMF. Like astrology, macroeconomics wears the mask of science—models, numbers, complex equations, predictions, claims of forecasting, “professors of macroeconomics,” journals and books published by (heavily subsidized) academic presses, and expanding national and international statistics bureaus that create and sustain legitimacy. Keynes and his followers are still widely celebrated. Yet, upon closer inspection, macroeconomics, like astrology, melts into thin air.

It is beyond comprehension, for example, why central bankers still believe that 2–4% yearly inflation rates represent “stability,” whereas a half percent deflation brings on instability and unmitigated disaster. What theory, what evidence, backs these views, if any? The answer is: none.

As to claims of pursuing policies to achieve “stability”: Fed presidents and economists like Paul Krugman favor the magic 3–4% inflation rates. Yet, if achieved, such rates raise the price level by about 10–12% over three years, and by amounts in the 40% range over ten years, eroding real wages correspondingly—unless the contracts are renegotiated. How do such rates and renegotiations fit definitions of “stability”? Instead of raising questions, commentators swallow hook, line, and sinker central bankers’ and Nobel Prize winners’ PR—often written by themselves.
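The compounding arithmetic is easy to verify; here is a minimal sketch, assuming a fixed (non-renegotiated) nominal wage and constant inflation at the rates quoted above:

```python
# Compounding arithmetic behind the figures above: what constant 3-4%
# inflation does to the price level and to a fixed nominal wage.

for rate in (0.03, 0.04):
    for years in (3, 10):
        factor = (1 + rate) ** years
        price_rise = factor - 1          # cumulative rise in the price level
        real_wage_loss = 1 - 1 / factor  # fall in the real value of a fixed wage
        print(f"{rate:.0%} over {years} years: prices up {price_rise:.0%}, "
              f"fixed nominal wage down {real_wage_loss:.0%} in real terms")
```

At 3–4% inflation, prices rise roughly 9–12% over three years and 34–48% over ten, so a wage contract left unrenegotiated loses a quarter to a third of its real value within a decade.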

What facts back John Williams’s recommendations of more spending on education when the data show that 65% of black males do not graduate within six years of starting college, and white males do not do much better at 60% (for women the respective percentages are 43% and 35%), and some 50% drop out of high school earlier? The numbers suggest that spending more on education does not create “assets” but, with the present institutional arrangements, creates liabilities instead. These liabilities take the shape of frustrated students who drop out of whatever passes for “education” these days and who, based on these numbers, end up carrying student debt backed by little future earnings.

As of January 1, 2016, 43% of the roughly 22 million Americans with federal student loans weren’t making payments, and one in six borrowers (3.6 million) was in default on $56 billion in student debt. With plans to forgive the debt if recipients work for the government or NGOs, but not if they work in the private sector, can anyone explain how thus subsidizing more bureaucracy and more NGOs is a remedy to our problems? True, the inflated bureaucracies would perhaps diminish the (mismeasured) unemployment rate. But as more cash flows through governments and more credit is allocated directly or indirectly through them (in this case to “subprime” students), unemployment and other aggregate numbers increasingly lose their meaning. Much good it did Communist countries to state that their measured unemployment was 1% and that their economies were growing: the more the allocation of credit is centralized, the less meaning governments’ aggregate statistics have, be they GDP, investment, or unemployment figures.

Writing in the Wall Street Journal in August 2016, Kevin Warsh, a former governor of the Fed, reached a similar conclusion:

The economics guild pushed ill-considered new dogmas into the mainstream of monetary policy. The Fed’s mantra of data-dependence causes erratic policy lurches in response to noisy data. Its medium-term policy objectives are at odds with its compulsion to keep asset prices elevated. Its inflation objectives are far more precise than the residual measurement error. Its output-gap economic models are troublingly unreliable.… The groupthink gathers adherents even as its successes become harder to find. The guild tightens its grip when it should open its mind to new data sources, new analytics, new economic models, new communication strategies, and a new paradigm for policy.

He adds that the obstacle to drastically changing policies is that the Fed has become “a general purpose agency of government”—though he fails to mention that this is what it was during the 1940s, too. But then the Fed’s status was acknowledged, whereas now the Fed is carrying out fiscal policy, affecting the allocation of credit, and strongly impacting the distribution of wealth, while claiming that its policies will induce people to take more risks. Since when is this the Fed’s mandate? And when has the Fed bureaucracy displayed anything like a sound understanding of human nature, history, and institutions so as to produce policies to induce “risk-taking”?

What can be done? Rather than relying on macroeconomic “science,” it would be far better to strengthen or develop institutions that could correct mistaken decisions faster. Proper political and private institutions that induce greater responsibility and accountability are key. They would reduce the scope and duration of mistakes and better ensure that talents are matched with capital.

This article originally appeared in American Affairs Volume I, Number 1 (Spring 2017): 62–81.

Note
This article draws on the author’s previous books and articles, where more detailed references can be found: Betting on Ideas (University of Chicago Press, 1985), Labyrinths of Prosperity (University of Michigan Press, 1994), The Force of Finance (Texere, 2002), A World of Chance, with Gabrielle A. Brenner and Aaron Brown (Cambridge University Press, 2008), “Treasury’s Little Buddy,” with Martin Fridson (International Economy, 2013), “Bernanke, Krugman, Summers: The High Priests of Macro-strology,” parts 1 and 2, Asia Times (2015).

 

