Book Excerpt: Managing Extreme Financial Risk

Published by GARP on November 19, 2013

A series of catastrophic events in recent years has sensitized corporations, governments and other institutions to the low-probability, high-impact threats known in the risk management profession as tail risks. These events have been occurring with increasing frequency in the form of both natural and human-caused disasters. The financial kind is the focus of veteran risk manager and adviser Karamjeet Paul in the recently published Managing Extreme Financial Risk: Strategies and Tactics for Going Concerns. He notes that the crisis of 2008-’09 was but the latest — and certainly the most costly and far-reaching — of a modern progression that included the stock market crash of 1987, the savings and loan failures of the early 1990s, Long-Term Capital Management and the Asian debt collapse in the late 1990s, and the Internet bubble shortly thereafter. Given this established, albeit unpredictable, pattern of mounting costs and concerns, Paul explains how conventional modeling is ill-suited for tail risks and lays out a framework for managing them that he defines as “sustainability management,” taking into account the systemic and operational consequences of extreme events that threaten not only a firm’s profitability, but also its very existence.

The following article is adapted from chapters of Managing Extreme Financial Risk (published in September by Academic Press) covering the need for a distinct, dedicated approach to tail risk and why traditional approaches fall short.

In the five years since the last financial crisis, much has been done to avoid repeating the experience. However, despite these well-meaning measures, the fundamental approach to managing extreme tail risk, which led to the failure or near-failure of institutions caught off guard in 2008, has not changed.

Extreme tail risk has always been the bane of a financial institution’s revenue model. It needs to be managed and addressed explicitly and distinctly from traditional risk management and capital management.

Financial institutions operate only in the context of known risk that can be priced. However, uncertainty management needs to focus beyond the pricing of risk, as the actual results are determined by how events and market conditions unfold. If actual results turn out to be favorable, there are larger profits than expected; if they turn out to be unfavorable, then there is an erosion of profits or even a net loss that must be offset by profits from other transactions.

However, conditions may turn out to be so unfavorable that large losses exceed profits from other transactions and erode the capital of the institution; in an extreme case, catastrophic losses may arise that exceed the capital and thus threaten the going-concern sustainability of the institution. So, how uncertainty is dealt with determines how an institution may fare during an extreme crisis.

Uncertainty that gives rise to risk can be divided into two components: quantifiable uncertainty and unquantifiable uncertainty.

Quantifiable uncertainty, managed effectively, drives the revenue model; unquantifiable uncertainty, if not managed effectively, threatens a going concern.

Quantifiable Uncertainty
By definition, in dealing with quantifiable uncertainty, expected loss values can be defined, and thus risk premium can be derived. Therefore, the revenue model is structured in such a way as to earn an adequate risk premium, or revenues over time to absorb expected loss values. And if risk premium falls short, then other operational means may be used to mitigate the shortfall to yield the required profit target. This means that expected losses and exposure from quantifiable uncertainty can be mitigated and/or priced in the normal course of running a business, and are anticipated to be absorbed by the risk premium or the revenue stream over time.

This makes risk premium the primary loss-absorbing agent, or the driver of protection against expected losses. If the risk premium is priced properly and the underlying models are sound, there will be adequate revenues to cover the expected losses from quantifiable uncertainty. As a result, the objective in managing exposure from quantifiable uncertainty is to structure (through the risk-reward relationship), preserve (through subsequent business actions), and protect (through controls) revenues and profits from risk to generate adequate risk premium.

This is the objective of traditional risk management, with a goal to leverage the uncertainty to maximize revenues, while mitigating and absorbing expected losses.
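
To make the mechanics concrete, here is a minimal sketch, in Python, of how expected losses from quantifiable uncertainty might be priced into a risk premium. The default-probability and severity figures, and the simple expected-loss formula itself, are hypothetical illustrations standing in for whatever pricing models an institution actually uses.

```python
# A minimal, hypothetical illustration of pricing expected losses
# into a risk premium. All figures are made up for illustration.

def expected_loss(prob_default: float, loss_given_default: float, exposure: float) -> float:
    """Expected loss = probability of default x loss severity x exposure."""
    return prob_default * loss_given_default * exposure

def required_premium_rate(exposure: float, el: float, target_margin: float) -> float:
    """Premium rate that absorbs expected losses and still leaves the target profit margin."""
    return el / exposure + target_margin

exposure = 1_000_000                          # hypothetical loan amount
el = expected_loss(0.02, 0.40, exposure)      # 2% default probability, 40% loss severity
rate = required_premium_rate(exposure, el, target_margin=0.005)

print(f"Expected loss: ${el:,.0f}")           # $8,000
print(f"Required risk premium: {rate:.2%}")   # 1.30% over funding cost
```

The point of the sketch is simply that, when uncertainty is quantifiable, expected loss becomes one more cost the revenue model can price in and absorb over time.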

Unquantifiable Uncertainty
There is a small segment of the risk/exposure spectrum where uncertainty cannot be defined or quantified and can give rise to unexpected losses. In addition to being unexpected, such losses can be so huge as to be catastrophic in an extreme situation, and so large that they obviously cannot be priced into transactions.

The combination of their unexpected nature and potential enormity makes it difficult, if not impossible, to mitigate such losses in the normal course of running a business. Unless specific steps are taken, the only defense against unexpected losses is the capital of the institution. This is the defined purpose of capital, and why regulators place strong emphasis on its adequacy.

However, in extreme scenarios, unexpected losses can exceed the capital and thus threaten the survival of the institution. Therefore, in relation to risk from unquantifiable uncertainty, survival depends on how well the capital is protected and preserved to maintain going-concern sustainability.

This makes capital the primary driver of defense against unexpected losses from unquantifiable uncertainty. As a result, the objective in managing unquantifiable uncertainty is to preserve and protect capital from risk. This is the objective of sustainability management (also referred to here as tail-risk management), with a goal to leverage resources to mitigate and absorb unexpected losses in such a way that the capital is always protected and preserved. With prudent guidelines, limits, and proper controls, this goal drives the going-concern sustainability of the institution.

Risk management and sustainability management have such different objectives and motivations that one cannot be extended to manage the other. Risk management drives the revenue engine, while sustainability management preserves the going concern. Relying on risk management to manage going-concern sustainability is a disaster waiting to happen, as shown by the events of 2008.

Distinct Focus on Sustainability
Risk management is based upon the theory of probability, whereas sustainability management focuses on ensuring survival. Should a firm depend upon probabilities to ensure survival?

An analogy may help address this key issue.

Would you provide for the protection of your child’s life on the basis of probability theory, cost-benefit analysis or expected values? Most people would agree that it is dysfunctional to think of a child’s life in such terms. Similarly, to manage the sustainability of a going concern in terms of probabilities, expected values and cost-benefit tradeoffs is not just dysfunctional; it is also imprudent. This is the primary reason a fundamentally different approach is needed.

Issues related to risk management (to deal with quantifiable uncertainty) and those related to sustainability management (to deal with unquantifiable uncertainty) are starkly different.

  • Risk management has a financial objective, while sustainability management is about the life and death of the company.
  • The cost of being wrong in relation to risk management is the loss of profits, whereas the cost of being wrong in relation to sustainability management could be the company’s death.
  • Risk management engages the human ability to benefit from uncertainty, while sustainability management must cater to the human desire for protection from uncertainty.
  • Risk management employs cost-benefit judgment for decision criteria; what should be the decision criteria for sustainability management of a going concern?
  • Risk management employs probability-driven models extensively; what tools should be used to manage sustainability?
  • Risk management solutions are driven by the adequacy of revenues (risk premium); sustainability management solutions must be driven by the objective to maintain adequate capital.

Three Legs of Risk Governance
Risk management deals with specific outcomes that are unknown, but expected loss values can be quantified and thus are known. Capital management deals with known scenarios, but their occurrences are unknown and cannot be quantified, and thus stress testing focuses on specific situations. Sustainability management relates to unknown scenarios that cannot be envisioned, and thus their occurrences are also unknown.

Sustainability management is the third of three distinct legs of risk governance. Each needs its own set of parameters.

Appropriately, because risk management drives the revenue engine, financial institutions have invested huge amounts of resources in it over the last 20 years. By 2008, risk management was a very sophisticated discipline at financial institutions. It is worth examining what explains, despite that sophistication, the blind-side blow dealt to almost all financial institutions in 2008-2009.

As recently as the early 1980s, many people considered banking a mature industry. One often heard that the days of double-digit earnings growth for banks were history. Banking appeared to be turning into more of a transaction processing business. Revenue growth seemed to be sputtering and, as the existing pie was being shared with new players such as GE Capital, Charles Schwab & Co. and Fidelity Investments, financial institutions were looking beyond their traditional business to find new revenue sources.

Against this backdrop came a unique combination of factors: quants, securitization and computing power.

Quants turned the old gut-feel-based art of pricing loans and investments into precise mathematical equations. Information was soon dissected in every which way to turn the yield on an instrument into precisely quantified premiums for each type of risk, and thus to create customized products to appeal to specific pools of liquidity ranging from pension funds seeking investment-grade instruments to hedge funds searching for risk to capture higher yields. With the creation of new products, there was almost no aspect of risk that couldn’t be quantified and priced into new forms of transactions.

Securitization allowed financial institutions to turn almost any asset into a security that could be sold more easily in large volumes. With the increasing globalization of financial markets, securitization made it possible to fulfill the liquidity needs of borrowers in a small town in the U.S. from a liquidity pool in unheard-of places in Sweden, Japan or an emirate in the Middle East. Securitization greatly expanded the appeal of newly created, quant-driven products.

It is one thing to write equations and quite another to populate them with data to create a real product that can be analyzed, reviewed, bought and sold easily in financial markets. In fast-paced financial markets, huge amounts of data are useful only if they can be processed quickly and easily to populate equations and formulas. It so happens that this period coincided with an unprecedented leap in the ability to use and manage, as well as an enormous reduction in the cost to process, large amounts of information.

The Mature Industry Roars Back
The combination of factors redefined the financial industry: Algorithms were developed by quants, and computing power was harnessed to leverage enormous volumes of data to create financial instruments that could be traded in global markets through securitization.

This was like strapping solid rocket boosters to a craft that was losing altitude. It lifted the financial industry into new orbits. Capital markets, once viewed in the early 1980s as taking away the banking business, became a source of new revenues. Instead of simply intermediating liquidity globally, institutions began intermediating risk between the sources of liquidity and the users of liquidity: parties who may hold different views of risk from each other and thus need intermediaries to match liquidity by transforming risk. This turned the old world of gut feel for risk into rocket science, and risk management became a very sophisticated discipline and the sole focus in re-creating, managing and driving the revenue model.

This sole focus actually became a key factor in the blind-side experience of 2008. Even though the buildup to the crisis was long in the making, no one expected such a sudden impact and its unprecedented magnitude. Some large institutions, shaping and making markets, were gone almost in an instant. Surviving institutions were badly bruised, with some coming close to becoming casualties themselves.

False Sense of Security
Despite the sophistication, advancement and investment of resources in risk management, a disaster happened. Or is it that the disaster happened because of the sophistication?

Enthralled by new revenue opportunities, over the last 25 years the industry focused on what quants and models could do at the expense of realizing what they couldn’t do. In the frenzy to drive revenue models, the downside was so totally eclipsed by the upside that institutions, rating agencies, and regulators neglected to focus on or even ask about the downside.

With each passing year without even a mini-crisis that might have highlighted the limitations of quant models, the industry developed a false sense of security. The downside was often equated not to a crater but merely to less upside. This also led institutions and watchdogs to read into the sophistication that replaced gut feel with precise answers capabilities that the models simply do not have.

Professor Emanuel Derman, a physicist who is director of Columbia University’s program in financial engineering and a former managing director and head of quantitative risk strategies at Goldman Sachs, defines models as “metaphors that compare the object of their attention to something else that it resembles. Resemblance is always partial, and so models necessarily simplify things and reduce dimensions of the world.”

The key word is “resemble,” as models are not capable of exactly and completely duplicating all the possibilities of the real world. Therefore, models have significant limitations. It is human nature that enthusiasm for sophistication can sometimes lead to a discounting of its critical limitations.

It is by design that risk management is about structuring, preserving and protecting revenues and profits, because only the larger portion of the risk spectrum can be defined, modeled and quantified. However, the recognition that this applies only to a portion, albeit a large one, of the risk spectrum was completely overshadowed by the need to drive the revenue engine.

Whenever someone asked about the downside from the portion of the risk spectrum that can’t be quantified, either the questions were arrogantly ignored, or answers that related to the quantifiable world were assumed to apply to a realm in which their validity and usefulness are highly questionable.

VaR and What It Misses
Take Value at Risk (VaR) as an example. Implicitly extending VaR to cover what it does not actually measure was a primary reason for complacency and a false sense of security.

VaR has been, and continues to be, the primary quantitative measure of market risk. So much emphasis was placed on it that it came to be taken as the sole indicator of an institution’s market risk. But the complexities behind its definition, and the limits those complexities create, were either not comprehended or were discounted. The most significant limitation is that it does not measure tail risk.

VaR is defined as the worst expected loss over a given time horizon at a given confidence level under normal market conditions. Generally, financial institutions refer to VaR as the maximum one-day loss with a 95% confidence level. For example, a VaR of $10 million means that an institution’s loss from market risk will not exceed $10 million on 95 days out of every 100 days of operating the business under normal market conditions.

In that example, VaR says nothing about what the maximum loss may be on any of the other five days out of every 100, or what the maximum loss could be under abnormal market conditions. The maximum loss under either of those conditions could be many times $10 million. Therefore, not recognizing these limitations, and simply assuming $10 million to be the maximum exposure on any given day, would constitute a major oversight.
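
A small numerical sketch may make the gap concrete. The Python snippet below, using an entirely hypothetical simulated P&L distribution, computes a one-day 95% VaR by historical simulation and then examines the tail VaR says nothing about: the average loss on the worst 5% of days, and the single worst day, which can be many times the VaR figure.

```python
# A hypothetical illustration of what one-day 95% VaR does and does not measure.
import numpy as np

rng = np.random.default_rng(42)

# Simulated daily P&L in dollars: mostly ordinary days, plus a small
# fraction of extreme loss days of the kind normal-market models miss.
ordinary_days = rng.normal(loc=0.0, scale=5e6, size=9_900)
extreme_days = rng.normal(loc=-60e6, scale=20e6, size=100)
pnl = np.concatenate([ordinary_days, extreme_days])

# 95% one-day VaR: the loss not exceeded on 95 out of every 100 days.
var_95 = -np.percentile(pnl, 5)

# The other five days: average loss beyond VaR (expected shortfall)
# and the single worst simulated day, the region on which VaR is silent.
tail_losses = pnl[pnl <= -var_95]
expected_shortfall = -tail_losses.mean()
worst_day = -pnl.min()

print(f"95% one-day VaR:     ${var_95 / 1e6:,.1f} million")
print(f"Average tail loss:   ${expected_shortfall / 1e6:,.1f} million")
print(f"Worst simulated day: ${worst_day / 1e6:,.1f} million")
```

In this stylized distribution the average loss beyond the VaR threshold, and especially the worst day, dwarf the VaR figure itself, which is exactly the oversight described above.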

In the absence of a specific mention, it is easy to see how someone may have assumed that VaR was a solid measure of exposure from market risk, including extreme tail risk. After all, it had very sophisticated science behind it. And from the prominent attention devoted to it – with detailed disclosure of data, along with how that data related to daily revenues — in any institution’s annual shareholder report, one could easily assume that it was used primarily and extensively in managing market risk.

The use of risk management models became so acceptable in gauging market risk that even regulators and rating agencies used them to quantify risks in an institution’s portfolio. The Basel Committee on Banking Supervision allowed institutions to rely on their own VaR analysis to establish their capital requirements.

As discussed above, VaR has no direct relationship to the exposure from extreme tail risk, and the purpose of capital is to protect institutions from such extreme exposure. Therefore, the use of VaR in capital models can lead to low regulatory capital requirements if the VaR amount is low, even though the exposure from tail risk may be significantly high. In addition, the use of VaR for this regulatory purpose only advanced its acceptance at the expense of getting a handle on tail risk.

The Danger of Statistical Dependence
The sophistication of quant models, the fact that they came from rocket scientists whose collective expertise was beyond question, and the credibility lent by institutions’ appropriately huge investment in them blinded people to what models can and cannot do. In many cases, people could not even comprehend these limitations. Models are often meant to convey one thing through “resemblance,” but absent an understanding of the full picture, their meaning can become something else.

In the absence of any significant problems with revenue models in the years preceding 2008, a sense of complacency developed that kept people from focusing on the dangers in underlying assumptions and on what models don’t do. The perception of sophistication in what models and risk management can do thus contributed to: “Who would have thought that …!”

The result was that institutions, rating agencies and regulators were blindsided by relying solely on traditional risk management discipline and by not having a distinct focus on extreme tail risk and the critical need to emphasize it. For some institutions, the price was their going-concern sustainability.

Since 2008, after recognizing the complacency that had developed and the failure to fully appreciate the limitations of models, institutions have devoted significant resources to addressing those limitations. Some have even suggested that the problem has been reduced to a very small portion of the risk spectrum. Has enough been done to avoid a repeat of the 2008 experience, or something even worse? Can institutions rely on probability-driven models for something as critical as survival?

Karamjeet Paul, author of Managing Extreme Financial Risk: Strategies and Tactics for Going Concerns (Academic Press, September 2013), has over 30 years of operating, finance, treasury and exposure management experience. In the early 1980s, he developed and implemented a pioneering framework at Citicorp that became the foundation of the interest-rate exposure management now used by all financial institutions. More recently, as managing principal of Strategic Exposure Group, he has developed the “sustainability management” discipline to enable proactive management of the gap between aggressive growth targets and prudent sustainability goals. His operating and client experience spans businesses in multiple industries, including banking and financial services, business process outsourcing services, manufacturing and distribution, publishing and pharmaceuticals. He is a graduate of the Indian Institute of Technology, Bombay, and received his MBA from Case Western Reserve University.
