Increasing income inequality in the US was an indirect cause of the noughties housing bubble. As evidence of rising income inequality mounted in the 1990s, the political response was to expand lending to households, especially low-income ones. In 1992 Congress passed an Act that instructed HUD to develop affordable housing goals. This allowed the participating agencies, such as Fannie Mae and Freddie Mac, to maintain lower capital-to-loan ratios than other regulated financial institutions. In 1995 Clinton increased the agencies’ mandate for low-income lending from 42% to 50% of assets and wrote the following in a preamble to a HUD strategy document: “This past year I directed HUD…to boost home ownership in America to an all-time high by the end of this century”. The paper went on to say:
“The lack of cash available to accumulate the required down payment and closing costs is the major impediment to purchasing a home. Other households do not have sufficient income to meet the monthly payments on mortgages financed at market interest rates for standard loan terms. Financing strategies, fuelled by the creativity and the resources of the public and private sectors, should address both of these barriers to home ownership”.
The Bush administration further expanded these policies in 2004, increasing the low-income lending mandate on Fannie and Freddie to 56% of their assets. Edward Pinto, former Chief Credit Officer of Fannie Mae, estimates that in June 2008 US government-sponsored agencies and programmes were exposed to $2.7 trillion of sub-prime loans, accounting for 54% of the US loan market. This spiral of credit perpetuated itself as rising house prices gave households and firms the collateral to borrow further still.
The US housing market boom was also fuelled by inflows of foreign capital. Developing countries with a surplus to invest (most notably China) viewed the US as a safe investment haven. A further incentive was that by sinking hundreds of billions of dollars into US Treasury bonds, the investing country depressed the value of its own currency and so maintained its competitiveness in the world’s largest export market. As this huge pool of cheap capital sought a safe home in the US, the US financial sector doubled as a percentage of GDP between 1975 and 2007.
The scenario illustrated above poses an obvious question: why did the Federal Reserve maintain such low interest rates during the noughties and show no concern at the conditions of cheap credit? Rajan identifies a trend of increasingly jobless recoveries in the US as one reason. The Federal Reserve has an explicit mandate to pursue maximum employment via the influence of monetary and credit conditions. This manifested itself in a willingness to perpetually stimulate the economy with low interest rates in the years following the collapse of the ‘tech bubble’ in 2000. Interest rates of 1% between 2001 and 2004 helped cushion the bursting of the tech bubble but contributed to the even larger bubble forming in the housing market.
While the Federal Reserve has a Keynesian mandate to use monetary and credit policy to pursue full employment, it maintains a neo-liberal view of its role in the identification and suppression of asset bubbles. Greenspan argues that “Instead of trying to contain a putative bubble by drastic actions with largely unpredictable consequences, we choose to focus on policies to mitigate the fallout when it occurs and hopefully ease the transition to the next expansion”. The Federal Reserve is therefore in a position where it must deal with the damage caused by bubbles but can do nothing to prevent them, and may even be contributing to their occurrence. If the Federal Reserve was relying on the power of the market to be self-correcting, why did market participants not identify the danger of a growing asset bubble? Beyond the well-worn debate surrounding the validity of the Efficient Market Hypothesis, the credit crunch had a particular villain of the piece: securitisation.
Securitisation, fuelled by the creativity and the resources of the public and private sectors, had reached bizarre levels of complexity with Collateralised Debt Obligations (CDOs). The problem with CDOs was that low-grade securities were sliced into tranches and the senior tranches given a AAA rating, lending a substantial portion of sub-prime ‘frogs’ the appearance of a AAA ‘prince’. Market transparency was further clouded by CDOs of CDOs (CDO-squared and CDO-cubed) and synthetic CDOs composed of credit default swaps. The result was a market flooded with ‘security sausage’: opaque, complex and illiquid securities whose risk was impossible to quantify. Warren Buffett, perhaps the world’s most high-profile investor, identified such instruments as “financial weapons of mass destruction” in his 2002 annual letter to Berkshire Hathaway shareholders. Market participants who slept well in their beds at night having purchased AAA-rated securities, only to awaken with missing shirts, may very well ask why the rating agencies had not reached a similar conclusion.
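The tranching arithmetic can be sketched numerically. The figures below (pool size, default probability, attachment point) are illustrative assumptions, not data from the essay; the point is that a senior tranche looks near-riskless if defaults are assumed independent, yet is no safer than the underlying loans if defaults are highly correlated, which is precisely where the AAA ratings went wrong.

```python
import math

# Illustrative assumptions (not data from the essay):
N = 100        # sub-prime loans in the pool
P = 0.10       # each loan's probability of default
ATTACH = 0.30  # senior tranche only takes losses beyond 30% of the pool

def binom_tail(n, p, k):
    """P(X > k) for X ~ Binomial(n, p): probability that more than
    k of the n loans default."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k + 1, n + 1))

# Assuming independent defaults, the senior tranche looks near-riskless:
p_senior_indep = binom_tail(N, P, int(ATTACH * N))

# Assuming perfectly correlated defaults (all loans fail together with
# probability P), the senior tranche is no safer than a single loan:
p_senior_corr = P

print(f"Senior tranche hit (independent defaults): {p_senior_indep:.1e}")
print(f"Senior tranche hit (correlated defaults):  {p_senior_corr:.2f}")
```

Under the independence assumption the senior tranche’s loss probability is vanishingly small, which is how pools of sub-prime loans could plausibly be stamped AAA; once defaults move together, as they did when house prices fell nationally, that protection evaporates.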
The answer is that rating agencies had ceased to act as referee on the investment field and had become participants in the game. Whereas Moody’s and Standard & Poor’s had previously enjoyed a comfortable market duopoly, in the 1990s they came under serious competitive pressure from a third agency: Fitch. Another change was the move from rating only companies, which was relatively straightforward, to rating structured debt bonds, which could be tailored with rating agency advice to achieve a desired rating. The result was that where rating agencies had once maintained independence from market participants, they had ‘gone native’, diluting the quality and integrity of their service in the search for business. As one Moody’s Managing Director wrote to his superiors in 2007, the company’s errors made it look “either incompetent at credit analysis, or like we sold our soul to the Devil for revenue, or a little bit of both”.
The moral hazard endemic within financial firms was a major contributing factor to the credit crunch. In the securitisation chain, a mortgage advisor issuing a ‘NINJA’ (no income, no job and no assets) mortgage received compensation but bore no consequence should the mortgage subsequently default. Similarly, a trader who gambled with CDOs would be rewarded handsomely on success but rarely punished on failure. Moral hazard is magnified within the financial services industry by the annual cash bonus system, which provides an incentive for short-term gains through excessive leverage and risk-taking, without regard to the longer-term consequences. Principal-agent issues are also amplified within the financial sector because it relies less on equity funding and more on borrowed money than other industries; management therefore has little concern when risking shareholders’ equity. The final firewall one might expect to keep financial institutions’ moral hazard in check would be the lending institutions that put their money at risk in the banks. However, even these institutions were subject to moral hazard due to the industry-wide perception that, since the Great Depression, central banks will always step in as lender of last resort in the event of a financial crisis (widely referred to as the ‘Greenspan put’). This was reinforced in 1998 by the Federal Reserve-orchestrated bailout of Long Term Capital Management.
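The bonus asymmetry described above can be made concrete with a toy expected-value calculation; all of the figures below are hypothetical, chosen only to show how a bet that is bad for the bank can still be good for the trader.

```python
# Hypothetical payoffs for a single risky trade (not from the essay):
p_win, gain, loss = 0.5, 100.0, -120.0  # bank's payoff in each outcome
bonus_share = 0.10                      # trader keeps 10% of gains only

# The bank bears both outcomes; the trader shares gains but not losses.
bank_ev = p_win * gain + (1 - p_win) * loss
trader_ev = p_win * bonus_share * gain + (1 - p_win) * 0.0

print(f"Expected value for the bank:   {bank_ev:+.1f}")
print(f"Expected value for the trader: {trader_ev:+.1f}")
```

The trade has negative expected value for the firm but positive expected value for the trader, so a rational trader takes it; this is the short-termism that an annual cash bonus, with no clawback for later losses, systematically rewards.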
With moral hazard endemic throughout the financial market, one might expect government regulation to act as the last stop-gap against irresponsible behaviour. However, the years preceding the credit crunch saw a relentless drive towards deregulation. 1999 saw the repeal of the Glass-Steagall Act, which had maintained a firewall between commercial banking and riskier investment banking activities since the Great Depression. The last thirty years have also seen the emergence of what McCulley called the ‘shadow banking system’, which is subject to lesser capital requirements and has access neither to Federal deposit insurance nor to lender-of-last-resort liquidity in the event of a bank run. Conventional banks increasingly embraced the shadow banking system as a means of regulatory arbitrage, placing many assets in off-balance-sheet vehicles that did not incur capital requirements. The risk in the shadow system was increased further still by hundreds of unregulated non-bank mortgage lenders operating a business model that relied heavily on short-term financing from larger banks.
Pop and Crunch
In 2006 the US housing bubble burst and thousands of sub-prime mortgages began to default. As a result the shadow banking system, including non-bank lenders and hedge funds, was denied the short-term lines of credit from larger banks on which it relied. Without a lender of last resort to fall back on it became subject to bank runs, and by March 2007 over fifty non-bank lenders had declared bankruptcy. Two hedge funds run by Bear Stearns, which had sunk billions of dollars into highly illiquid CDOs, declared bankruptcy by the end of July 2007. As banks struggled to determine their own and other banks’ liabilities in the mire of ‘security sausage’, fear and panic began to take hold. In August 2007 the LIBOR-OIS spread rose from ten basis points to seventy, signalling that liquidity in overnight money markets had largely dried up. Capital injections from sovereign wealth funds in the Middle East and Asia returned the market to a period of relative calm during the winter of 2007-2008. However, by March 2008 banks around the world had announced write-downs of $260 billion. The investment banks, highly leveraged and without access to Federal deposit insurance or lender-of-last-resort liquidity, were in dire straits. In March 2008 Bear Stearns collapsed (sold at less than 10% of its 52-week high) and by the end of that year each of the ‘big five’ investment banks had either declared bankruptcy or lost its independence.
The central banks took an initially hard line on moral hazard, but the frailty of this position was exposed as Bear Stearns collapsed. Although the Bear Stearns shareholders were effectively wiped out, the Federal Reserve agreed to assume most of the future losses tied to the firm’s toxic assets. Not to do so risked triggering a derivatives failure with domino effects throughout the financial system; the banks were too big to fail. This moral hazard dilemma played out on a global scale during 2008.
By 2009 central banks across the world had pushed interest rates close to zero. However, making money available to banks did little to stimulate the wider financial system because the banks, petrified by uncertainty and contracting asset values on their balance sheets, refused to lend it.
Faced with financial paralysis and economic catastrophe, the central banks embarked on a policy of ‘quantitative easing’ (i.e. creating new money), buying safe government debt which they in turn traded for the toxic debt of financial institutions. In 2007 the Federal Reserve held $900 billion of assets, mostly safe US government debt. By 2009 this had ballooned to $2.3 trillion, much of it toxic. This removed the dead weight of debt that was paralysing financial institutions, but in essence transferred a massive burden of toxic liabilities onto the tax-paying public.
To understand why governments took such unprecedented steps, one must examine the unthinkable consequences of inaction. The financial system stood on the precipice of meltdown, with similar consequences in prospect for the real economy. Banks had become too big and too interdependent to fail, and had the central banks not maintained liquidity in the financial system, many major banks would have collapsed. Not only would depositors have lost their savings; modern electronic-transaction-based commerce would likely have suffered an immediate seizure. Consumers would not have been able to pay with their credit cards, and direct debit payments would have ceased. The business-to-business payments system would have stalled and companies, without the overdraft facilities on which they rely for immediate cash flow, would have closed. Trade would have ground to a halt as banks, without access to short-term finance, would have been unable to issue letters of credit. Had commerce and trade effectively ceased, the threat of civil unrest would have been very real. As the UK’s security services reportedly assume, society “is never more than four meals away from anarchy”.
The consequences of economic collapse would have been global, for several reasons. First, approximately half of the securitised loans created in the US had been sold to investors abroad. Second, because the US is the world’s largest economy, many of the world’s international supply chains lead there. Third, migrant workers in the US send a sizeable portion of their incomes home, so a collapse in the US economy would significantly reduce the national incomes of other countries.
The scenarios outlined above all occurred to some extent as a result of the 2008 credit crunch. The US suffered a severe recession of 18 months’ duration (against the post-war average of 10 months), with an annualised contraction rate of 6.8% in Q4 2008. Some advanced economies suffered an even greater shock, with the UK, Germany and Japan recording Q4 2008 annualised contraction rates of 8.4%, 8.8% and 11.6% respectively. However, Keynesian government intervention on an unprecedented scale was successful in alleviating the immediate threat of economic meltdown and mass unemployment.
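For readers unfamiliar with the US convention of annualised quarterly rates, the arithmetic is simple compounding of the quarter-on-quarter growth rate over four quarters. The sketch below works backwards from the 6.8% figure cited above to the implied quarter-on-quarter fall of roughly 1.7%.

```python
def annualise(quarterly_growth):
    """Compound a quarter-on-quarter growth rate over four quarters,
    the convention used for US GDP figures."""
    return (1 + quarterly_growth) ** 4 - 1

# Quarter-on-quarter rate that, compounded, gives a -6.8% annualised rate:
q = (1 - 0.068) ** 0.25 - 1

print(f"quarter-on-quarter: {q:+.2%}, annualised: {annualise(q):+.1%}")
```

This is why annualised headline figures sound so dramatic: they report what a whole year would look like if a single quarter’s fall were repeated four times over.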
Despite government success in averting potential disaster, there are those who argue that government intervention was the root cause of the global financial crisis and that the cure may prove worse than any crash. These views originated in the ‘Austrian school’, which today broadly corresponds to libertarian economic beliefs. Its adherents propose that government policies such as easy monetary policy, regulation and intervention interfere negatively with the workings of the free market, and that the policy response to the credit crunch will eventually deliver the worst of both worlds. Instead of letting weak, over-leveraged banks and corporations perish, thereby allowing the strong to survive in a burst of ‘creative destruction’, governments around the world have created an economy of the ‘living dead’. The Austrian school would cite Japan’s decade of ‘zombie banks’ as an example of the living dead holding back the living, the result being declining productivity and secular growth. It would argue that governments are only delaying the inevitable with their interventions and that a painful recession is required to purge the capitalist system of its dead wood and ensure a sustainable recovery.
Whatever one’s views on the short-term reaction to the recent crisis, the Keynesian die has now been cast. Beyond the free market neo-liberals, few would argue against taking preventative measures against a future credit crunch. In order to assess the success of government in approaching this task we must revisit the causes of the credit crunch identified earlier; government backed sub-prime lending, securitisation, ratings agencies, deregulation and the shadow banking system, moral hazard, foreign capital inflows and the Federal Reserve’s paradoxical approach to asset bubbles.
In 2011 the US Treasury and HUD reported to Congress on their intentions for reform of the housing market. They recognise that Fannie and Freddie became caught up in the private sector’s pursuit of profit through excessive risk-taking and propose that both institutions ultimately be wound down. However, no timescale is given for this winding down, and the report sends alarmingly conflicting signals:
“Our plan…means access to credit for those Americans who want to own their own home, which has helped millions of middle class families build wealth and achieve the American Dream. And it means a helping hand for lower-income Americans, who are burdened by the strain of high housing costs…Securitization, alongside credit from the banking system, should continue to play a major role in housing finance…”
The above appears to echo the dangerously idealistic sentiments of the 1995 HUD report and creates the potential that a continued political will for the democratisation of credit will foster future bubbles. The report envisages a continued role for securitisation, albeit with more “transparency, standardization, and accountability in the securitisation chain”. As the rules will only be finalised in 2011 and take effect in 2012, it is not possible to draw conclusions at this point. However, these “weapons of mass destruction” will require robust legislation if they are not to pose a future threat to the financial system. The malleable nature of derivatives will always create the potential for creative bankers to tailor their structure to exploit regulatory loopholes. A reformed and properly functioning rating agency industry is therefore required in tandem with regulation if the securitisation threat is to be properly nullified.
Many proposals for reforming the rating agency business model have been made, including increasing the legal liability of the agencies, returning to the ‘investor pays’ model, developing a ‘common pool payment’ model, making investment ratings a monopoly enterprise and swapping the mechanistic reliance on rating agencies for internal credit risk assessment practices. However, the only concrete change implemented so far is the requirement to disclose the information used to rate securities (a 2010 amendment to the US Securities Exchange Act of 1934). The pre-crisis rating agency status quo remains largely intact.
The US government has been more proactive in prescribing regulatory reform, via the 2010 Dodd-Frank Act. This wide-ranging bill aims to promote robust supervision and regulation of financial firms, establish comprehensive supervision of financial markets, protect consumers and investors from financial abuse, give government the tools to manage financial crises and raise international regulatory standards. A key element of the Act is section 619 (widely known as the ‘Volcker Rule’), which bans banks from proprietary trading and from owning large stakes in hedge funds and private equity firms. This goes some way towards addressing the systemic risk created by what Volcker (2010) referred to as “casino banking” and is comparable to the implementation of the Glass-Steagall Act following the Great Depression. There are concerns over the implementation timeframe, as more than six years will elapse before the restrictions on proprietary trading are complete. However, the early signs are encouraging, with JP Morgan Chase, Morgan Stanley and Goldman Sachs all having effectively wound down their proprietary trading operations well ahead of the deadline imposed by the Act.
One failure of the Dodd-Frank Act is that it does not deal with the extraordinary concentration of assets held by a small number of large financial institutions. It has therefore failed to address the moral hazard of banks that are ‘too big to fail’; in the event of another systemic crisis the Federal Reserve would have no choice but to come to the rescue. Bernanke instead points to regulatory measures that prevent the need for bail-outs and to the new Financial Stability Oversight Council’s power to break up institutions deemed systemically risky. However, the ten largest US financial institutions currently hold over 70% of US financial assets, up from 10% in 1990. Reversing this position would require an extraordinary act of political will, made all the more unlikely by the fact that it would require co-ordinated international action to succeed. If the major financial centres did not act in unison, the banks would likely take flight to whichever centre let them continue in their current form.
The moral hazard inherent in bankers’ compensation structures is another issue requiring international co-ordination, for similar reasons, and here the early signs are encouraging. The CEBS has issued guidelines that 40-60% of bonus pay should be subject to deferral of at least three years, and the European community appears unified in implementing this. The US, initially reluctant to participate, is now proposing similar guidelines. However, the incentive issue is far from resolved; compensation in both US and EU investment banks accelerated from 2009 to 2010. This illustrates that the banking sector has still developed little sense of moral obligation to the tax-paying public to which it at least indirectly owes its survival. King has highlighted that banks are still behaving in a manner that pursues profits “next week” with scant regard for longer-term consequences.
Monetary policy is one of the most effective weapons for preventing speculative bubbles, and prevention becomes increasingly crucial in the context of the “balance of financial terror” in which the US finds itself regarding capital inflows from China. China cannot stop buying US debt because its biggest market would be liable to collapse, and the US cannot erect protectionist barriers because China would stop financing its profligate ways. This paper will not attempt to draw any conclusions on the current account deficit debate other than that the net inflow of foreign capital into the US is unlikely to end any time soon. Despite this ongoing driver of asset bubbles, central banks maintain their argument against using monetary policy to prevent them: the assumed instrument, interest rates, is too blunt. However, there are various alternative instruments the Federal Reserve could deploy. These include altering the margin requirements when leverage is employed to purchase securities (Regulation T) or altering the reserve ratios banks must hold against certain asset classes (Regulation D), which could constrain credit creation against asset classes such as real estate. Post-crisis, the central banks have given no indication that they will assume the role of identifying and suppressing potential asset bubbles, and they therefore remain in the paradoxical role of laissez-faire ‘bubble watching’ and Keynesian ‘bubble clean-up’.
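In toy form, both of these alternative levers work by mechanically capping how much credit a given base of equity or reserves can support. The parameter values below are illustrative assumptions, not actual Regulation T or Regulation D settings.

```python
def max_position(equity, margin_requirement):
    """Largest securities position financeable with a given initial
    margin requirement (a Regulation T-style lever)."""
    return equity / margin_requirement

def max_deposit_expansion(reserves, reserve_ratio):
    """Textbook money-multiplier ceiling on the deposits a banking
    system can support with given reserves (a reserve-ratio lever)."""
    return reserves / reserve_ratio

# Raising margin from 50% to 75% shrinks the buying power of $1m of
# equity from $2m to about $1.33m:
print(max_position(1e6, 0.50), max_position(1e6, 0.75))

# Doubling the reserve ratio from 10% to 20% halves the deposit ceiling:
print(max_deposit_expansion(1e9, 0.10), max_deposit_expansion(1e9, 0.20))
```

Because such caps can in principle be varied by asset class, they offer a more targeted brake on, say, real-estate lending than an economy-wide interest rate rise, which is the substance of the ‘too blunt’ objection to using rates alone.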
The credit crunch resulted from the fall out of a massive asset bubble that formed within the US housing market. This bubble was fuelled by the cheap credit provided by government backed sub-prime lending, foreign capital inflows and persistently low central bank interest rates. The bubble was facilitated by opaque securitisation, underperforming rating agencies, deregulation, shadow banking and systemic moral hazard within the financial system.
When the US housing market collapsed, the global financial system seized because the balance sheets of financial institutions were flooded with opaque derivatives whose risk was impossible to quantify. The threat of a wave of bankruptcies posed a catastrophic risk to the mechanics of the real global economy, so government had little choice but to assume the toxic debts of financial institutions in order to inject liquidity and confidence into the financial system. Government was successful in averting the immediate threat, but by bailing out over-leveraged institutions it has created the potential that long-term productivity and growth may be stifled.
Government success in addressing the underlying causes of the credit crunch has been mixed. The US government has sent conflicting signals regarding its backing for socially driven lending. Securitisation regulation has yet to come into force and the rating agency status quo remains intact. The government has prescribed regulation that should mitigate the threat of ‘casino banking’ to the financial system and reduce the probability that a bailout will be required. However, the systemic moral hazard within the financial system has yet to be addressed. The banking sector has demonstrated little moral obligation to the taxpayer to which it owes its survival, and regulatory reform has failed to break up institutions that are ‘too big to fail’.
Despite the net inflow of foreign capital appearing inevitable for the foreseeable future, the Federal Reserve maintains its laissez-faire attitude to suppressing asset bubbles in tandem with its Keynesian mandate to perpetually stimulate the economy in the pursuit of full employment. Therefore the balance of probabilities is that large asset bubbles will re-occur.
In summary, although regulatory reform has gone some way towards reducing the probability of another global financial crisis, the systemic moral hazard within the financial sector remains. Should another such crisis occur, government (and by proxy the taxpayer) remains in a position whereby it must underwrite failing financial institutions in order to mitigate the threat their failure poses to the real economy.