The True Lessons of the Great Recession

By Raghuram Rajan

According to the conventional interpretation of the global economic recession, growth has ground to a halt in the West because demand has collapsed, a casualty of the massive amount of debt accumulated before the crisis. Households and countries are not spending because they can't borrow the funds to do so, and the best way to revive growth, the argument goes, is to find ways to get the money flowing again. Governments that still can should run up even larger deficits, and central banks should push interest rates even lower to encourage thrifty households to buy rather than save. Leaders should worry about the accumulated debt later, once their economies have picked up again.

This narrative -- the standard Keynesian line, modified for a debt crisis -- is the one to which most Western officials, central bankers, and Wall Street economists subscribe today. As the United States has shown signs of recovery, Keynesian pundits have been quick to claim success for their policies, pointing to Europe's emerging recession as proof of the folly of government austerity. But it is hard to tie recovery (or the lack of it) to specific policy interventions. Until recently, these same pundits were complaining that the stimulus packages in the United States were too small. So they could have claimed credit for Keynesian stimulus even if the recovery had not materialized, saying, "We told you to do more." And the massive fiscal deficits in Europe, as well as the European Central Bank's tremendous increase in lending to banks, suggest that it is not for want of government stimulus that growth is still fragile there.

In fact, today's economic troubles are not simply the result of inadequate demand but the result, equally, of a distorted supply side. For decades before the financial crisis in 2008, advanced economies were losing their ability to grow by making useful things. But they needed to somehow replace the jobs that had been lost to technology and foreign competition and to pay for the pensions and health care of their aging populations. So in an effort to pump up growth, governments spent more than they could afford and promoted easy credit to get households to do the same. The growth that these countries engineered, with its dependence on borrowing, proved unsustainable.

Rather than attempting to return to their artificially inflated GDP numbers from before the crisis, governments need to address the underlying flaws in their economies. In the United States, that means educating or retraining the workers who are falling behind, encouraging entrepreneurship and innovation, and harnessing the power of the financial sector to do good while preventing it from going off track. In southern Europe, by contrast, it means removing the regulations that protect firms and workers from competition and shrinking the government's presence in a number of areas, in the process eliminating unnecessary, unproductive jobs.

The End of Easy Growth

To understand what will, and won't, work to restore sustainable growth, it helps to consider a thumbnail sketch of the economic history of the past 60 years. The 1950s and 1960s were a time of rapid economic expansion in the West and Japan. Several factors underpinned this long boom: postwar reconstruction, the resurgence of trade after the protectionist 1930s, more educated work forces, and the broader use of technologies such as electricity and the internal combustion engine. But as the economist Tyler Cowen has argued, once these low-hanging fruit had been plucked, it became much harder to keep economies humming. The era of fast growth came to a sudden end in the early 1970s, when the OPEC countries, realizing the value of their collective bargaining power, jacked up the price of oil.

As growth faltered, government spending ballooned. During the good years of the 1960s, democratic governments had been quick to expand the welfare state. But this meant that when unemployment later rose, so did government spending on benefits for the jobless, even as tax revenues shrank. For a while, central banks accommodated that spending with expansionary monetary policy. That, however, led to high inflation in the 1970s, which was exacerbated by the rise in oil prices. Such inflation, although it lowered the real value of governments' debt, did not induce growth. Instead, stagflation eroded most economists' and policymakers' faith in Keynesian stimulus policies.

Central banks then changed course, making low and stable inflation their primary objective. But governments continued their deficit spending, and public debt as a share of GDP in industrial countries climbed steadily beginning in the late 1970s -- this time without inflation to reduce its real value. Recognizing the need to find new sources of growth, Washington, toward the end of President Jimmy Carter's term and then under President Ronald Reagan, deregulated many industries, such as aviation, electric power, trucking, and finance. So did Prime Minister Margaret Thatcher in the United Kingdom. Eventually, productivity began to pick up.

Whereas the United States and the United Kingdom responded to the slump of the 1970s with frenetic deregulation, continental Europe made more cosmetic reforms. The European Commission pushed deregulation in various industries, including the financial sector, but these measures were limited, especially when it came to introducing competition and dismantling generous worker protections. Perhaps as a result, while productivity growth took off once again in the United States starting in the mid-1990s, it fell to a crawl in continental Europe, especially in its poorer and less reform-minded southern periphery. In 1999, when the euro was introduced, Italy's unemployment rate was 11 percent, Greece's was 12 percent, and Spain's was 16 percent. The resulting drain on government coffers made it difficult to save for future spending on health care and pensions, promises made even more onerous by rapidly aging populations.

In countries that did reform, deregulation was not an unmitigated blessing. It did boost entrepreneurship and innovation, increase competition, and force existing firms to focus on efficiency, all of which gave consumers cheaper and better products. But it also had the unintended consequence of increasing income inequality -- creating a gap that, by and large, governments dealt with not by preparing their work forces for a knowledge economy but by giving them access to cheap credit.

Disrupting the Status Quo

For the United States, the world's largest economy, deregulation has been a mixed bag. Over the past few decades, the competition it has induced has widened the income gap between the rich and the poor and made it harder for the average American to find a stable, well-paying job with good benefits. But that competition has also led to a flood of cheap consumer goods, which means that whatever income the average American does earn now goes further than ever before.

During the postwar era of heavy regulation and limited competition, established firms in the United States had grown fat and happy, enjoying massive quasi-monopolistic profits. They shared these returns with their shareholders and their workers. For banks, this was the age of the "3-6-3" formula: borrow at three percent, lend at six percent, and head off to the golf course at 3 PM. Banks were profitable, safe, and boring, and the price was paid by depositors, who got the occasional toaster instead of market interest rates. Unions fought for well-paying jobs with good benefits, and firms were happy to accommodate them to secure industrial peace -- after all, there were plenty of profits to be shared.

In the 1980s and 1990s, the dismantling of regulations and trade barriers put an end to this cozy life. New entrepreneurs with better products challenged their slower-moving competitors, and the variety and quality of consumer products improved radically, altering people's lives largely for the better. Personal computers, connected through the Internet, have allowed users to entertain, inform, and shop for themselves, and cell phones have let people stay in constant contact with friends (and bosses). The shipping container, meanwhile, has enabled small foreign manufacturers to ship products speedily to faraway consumers. Relative to incomes, cotton shirts and canned peaches have never been cheaper.

Even as regular consumers' purchasing power grew, so did Wall Street payouts. Because companies' profits were under pressure, they began to innovate more and take greater risks, and doing so required financiers who could understand those risks, price them accurately, and distribute them judiciously. Banking was no longer boring; indeed, it became the command center of the economy, financing one company's expansion here while putting another into bankruptcy there.

Meanwhile, the best companies became more meritocratic, and they paid more to attract top talent. The top one percent of households had obtained only 8.9 percent of the total income generated in the United States in 1976, but by 2007 this had increased to nearly 25 percent. Even as the salaries of upper management grew, however, its ranks diversified. Compared with executives in 1980, corporate leaders in the United States in 2001 were younger, more likely to be women, and less likely to have Ivy League degrees (although they had more advanced degrees). It was no longer as important to belong to the right country club to reach the top; what mattered was having a good education and the right skills.

It is tempting to blame the ever-widening income gap on skewed corporate incentives and misguided tax policies, but neither explanation is sufficient. If the rise in executive salaries were just the result of bad corporate governance, as some have claimed, then doctors, lawyers, and academics would not have also seen their salaries grow as much as they have in recent years. And although the top tax rates were indeed lowered during the presidency of George W. Bush, these cuts weren't the primary source of the inequality, either, since inequality in before-tax incomes also rose. This is not to say that all top salaries are deserved -- it is not hard to find the pliant board overpaying the underperforming CEO -- but most are simply reflections of the value of skills in a competitive world.

In fact, since the 1980s, the income gap has widened not just between CEOs and the rest of society but across the economy, too, as routine tasks have been automated or outsourced. With the aid of technology and capital, one skilled worker can displace many unskilled workers. Think of it this way: when factories used mechanical lathes, university-educated Joe and high-school-educated Moe were no different and earned similar paychecks. But when factories upgraded to computerized lathes, not only was Joe more useful; Moe was no longer needed.

Not all low-skilled jobs have disappeared. Nonroutine, low-paying service jobs that are hard to automate or outsource, such as taxi driving, hairdressing, or gardening, remain plentiful. So the U.S. work force has bifurcated into low-paying professions that require few skills and high-paying ones that call for creativity and credentials. Comfortable, routine jobs that require moderate skills and offer good benefits have disappeared, and the laid-off workers have had to either upgrade their skills or take lower-paying service jobs.

Unfortunately, for various reasons -- inadequate early schooling, dysfunctional families and communities, the high cost of university education -- far too many Americans have not gotten the education or skills they need. Others have spent too much time in shrinking industries, such as auto manufacturing, instead of acquiring skills in growing sectors, such as medical technology. As the economists Claudia Goldin and Lawrence Katz have put it, in "the race between technology and education" in the United States in the last few decades, education has fallen behind.

As Americans' skills have lagged, the gap between the wages of the well educated and the wages of the moderately educated has grown even further. Since the early 1980s, the difference between the incomes of the top ten percent of earners (who typically hold university degrees) and those of the middle (most of whom have only a high school diploma) has grown steadily. By contrast, the difference between median incomes and incomes of the bottom ten percent has barely budged. The top is running away from the middle, and the middle is merging with the bottom.

The statistics are alarming. In the United States, 35 percent of those aged 25 to 54 with no high school diploma have no job, and high school dropouts are three times as likely to be unemployed as university graduates. What is more, Americans between the ages of 25 and 34 are less likely to have a degree than those between 45 and 54, even though degrees have become more valuable in the labor market. Most troubling, however, is that in recent years, the children of rich parents have been far more likely to get college degrees than were similar children in the past, whereas college completion rates for children in poor households have stayed consistently low. The income divide created by the educational divide is becoming entrenched.

The Politicians Respond

In the years before the crisis, the everyday reality for middle-class Americans was a paycheck that refused to grow and a job that became less secure every year, even while the upper-middle class and the very rich got richer. Well-paying, low-skilled jobs with good benefits were becoming harder and harder to find, except perhaps in the government.

Rather than address the underlying reasons for this trend, American politicians opted for easy answers. Their response may be understandable; after all, it is not easy to upgrade workers' skills quickly. But the resulting fixes did more damage than good. Politicians sought to boost consumption, hoping that if middle-class voters felt like they were keeping up with their richer neighbors -- if they could afford a new car every few years and the occasional exotic holiday -- they might pay less attention to the fact that their salaries weren't growing. One easy way to do that was to enhance the public's access to credit.

Accordingly, starting in the early 1990s, U.S. leaders encouraged the financial sector to lend more to households, especially lower-middle-class ones. In 1992, Congress passed the Federal Housing Enterprises Financial Safety and Soundness Act, partly to gain more control over Fannie Mae and Freddie Mac, the giant private mortgage agencies, and partly to promote affordable homeownership for low-income groups.

Such policies helped money flow to lower-middle-class households and raised their spending -- so much so that consumption inequality rose much less than income inequality in the years before the crisis. These policies were also politically popular. Unlike an expansion of government welfare transfers, the expansion of credit to the lower-middle class drew little opposition -- not from the politicians who wanted more growth and happy constituents, not from the bankers and brokers who profited from the mortgage fees, not from the borrowers who could now buy their dream houses with virtually no money down, and not from the laissez-faire bank regulators who thought they could pick up the pieces if the housing market collapsed. Cynical as it may seem, easy credit was used as a palliative by successive administrations unable or unwilling to directly address the deeper problems with the economy or the anxieties of the middle class.

The Federal Reserve abetted these shortsighted policies. In 2001, in response to the dot-com bust, the Fed cut short-term interest rates to the bone. Even though the overstretched corporations that were meant to be stimulated were not interested in investing, artificially low interest rates acted as a tremendous subsidy to the parts of the economy that relied on debt, such as housing and finance. This led to an expansion in housing construction (and related services, such as real estate brokerage and mortgage lending), which created jobs, especially for the unskilled. Progressive economists applauded this process, arguing that the housing boom would lift the economy out of the doldrums. But the Fed-supported bubble proved unsustainable. Many construction workers have lost their jobs and are now in deeper trouble than before, having also borrowed to buy unaffordable houses.

Bankers obviously deserve a large share of the blame for the crisis. Some of the financial sector's activities were clearly predatory, if not outright criminal. But the role that the politically induced expansion of credit played cannot be ignored; it is the main reason the usual checks and balances on financial risk taking broke down.

Outside the United States, other governments responded differently to slowing growth in the 1990s. Some countries focused on making themselves more competitive. Fiscally conservative Germany, for example, reduced unemployment benefits even while reducing worker protections. Wages grew slowly even as productivity increased, and Germany became one of the most competitive manufacturers in the world. But some other European countries, such as Greece and Italy, had little incentive to reform, as the inflow of easy credit after their accession to the eurozone kept growth going and helped bring down unemployment. The Greek government borrowed to create high-paying but unproductive government jobs, and unemployment came down sharply. But eventually, Greece could borrow no more, and its GDP is now shrinking fast. Not all the European countries in trouble relied on central government borrowing and spending. In Spain, a combination of a construction boom and spending by local governments created jobs. In Ireland, it was primarily a housing bubble that did the trick. Regardless, the common thread was that debt-fueled growth was unsustainable.

What Can Be Done?

Since the growth before the crisis was distorted in fundamental ways, it is hard to imagine that governments could restore demand quickly -- or that doing so would be enough to get the global economy back on track. The status quo ante is not a good place to return to because bloated finance, residential construction, and government sectors need to shrink, and workers need to move to more productive work. The way out of the crisis cannot be still more borrowing and spending, especially if the spending does not build lasting assets that will help future generations pay off the debts that they will be saddled with. Instead, the best short-term policy response is to focus on long-term sustainable growth.

Countries that don't have the option of running higher deficits, such as Greece, Italy, and Spain, should shrink the size of their governments and improve their tax collection. They must allow freer entry into such professions as accounting, law, and pharmacy, while exposing sectors such as transportation to more competition, and they should reduce employment protections -- moves that would create more private-sector jobs for laid-off government workers and unemployed youth. Fiscal austerity is not painless and will probably subtract from growth in the short run. It would be far better to phase reforms in over time, yet it is precisely because governments did not act in good times that they are forced to do so, and quickly, in bad times. Indeed, there is a case to be made for doing what is necessary quickly and across the board so that everyone feels that the pain is shared, rather than spreading it over time and risking dissipating the political will. Governments should not, however, underestimate the pain that these measures will cause to the elderly, the youth, and the poor, and where possible, they should enact targeted legislation to alleviate the measures' impact.

The United States, for its part, can take some comfort in the powerful forces that should help create more productive jobs in the future: better information and communications technology, lower-cost clean energy, and sharply rising demand in emerging markets for higher-value-added goods. But it also needs to take decisive action now so that it can be ready to take advantage of these forces. The United States must improve the capabilities of its work force, preserve an environment for innovation, and regulate finance better so as to prevent excess.

None of this will be easy, of course. Consider how hard it is to improve the match between skills and jobs. Since the housing and financial sectors will not employ the numbers they did during the pre-crisis credit boom anytime soon, people who worked in, or depended on, those sectors will have to change careers. That takes time and is not always possible; the housing industry, in particular, employed many low-skilled workers, who are hard to place. Government programs aimed at skill building have a checkered history. Even government attempts to help students finance their educations have not always worked; some predatory private colleges have lured students with access to government financing into expensive degrees that have little value in the job market. Instead, much of the initiative has to come from people themselves.

That is not to say that Washington should be passive. Educational reform and universal health care are long overdue, but Washington can also do more on other fronts. More information on job prospects in various career tracks, along with better counseling about educational and training programs, can help people make better decisions before they enroll in expensive but useless programs. In areas with high youth unemployment, subsidies for firms to hire first-time young workers may get youth into the labor force and help them understand what it takes to hold a job. The government could support older unemployed workers more -- paying for child care and training -- so that they can retrain even while looking for work. Some portion of employed workers' unemployment insurance fees could accumulate in training and job-search accounts that could help them acquire skills or look for work if they get laid off.

At the same time, since new business ventures are what will create the innovation that is necessary for growth, the United States has to preserve its entrepreneurial environment. Although the political right is probably alarmist about the downsides of somewhat higher income taxes, significantly higher taxes can reduce the returns for entrepreneurship and skill acquisition considerably -- for the rich and the poor alike. It would be far better to reform the tax system, eliminating the loopholes and tax subsidies that accountants are so fond of finding, so that marginal income tax rates do not have to rise too much.

Culture also matters. Although it is important to shine the spotlight on egregious unearned salaries, lumping all high earners into an undifferentiated mass -- as the "one percent" label does -- could denigrate the wealth creation that has served the country so well. The debate on inequality should focus on how the United States can level up rather than on how it should level down.

Finally, even though the country should never forget that financial excess tipped the world over into crisis, politicians must not lobotomize banking through regulation to make it boring again. Finance needs to be vibrant to make possible the entrepreneurship and innovation that the world sorely needs. At the same time, legislation such as the Dodd-Frank Act, which overhauled financial regulation, although much derided for the burdens it imposes, needs to be given a chance to do its job of channeling the private sector's energies away from excessive risk taking. As experience with these new regulations builds, they can be altered if they prove too onerous. Americans should remain alert to the reality that regulations are shaped by incumbents to benefit themselves. They should also remember the role political mandates and Federal Reserve policies played in the crisis and watch out for a repeat.

The industrial countries have a choice. They can act as if all is well except that their consumers are in a funk and so what John Maynard Keynes called "animal spirits" must be revived through stimulus measures. Or they can treat the crisis as a wake-up call and move to fix all that has been papered over in the last few decades and thus put themselves in a better position to take advantage of coming opportunities. For better or worse, the narrative that persuades these countries' governments and publics will determine their futures -- and that of the global economy.

RAGHURAM RAJAN is Professor of Finance at the University of Chicago Booth School of Business and the author of Fault Lines: How Hidden Fractures Still Threaten the World Economy.
