# Understanding Extreme Risk

Crashes and crises occur much more frequently than experts once thought. A branch of statistics known as extreme value theory provides a tool set for anticipating these rare market events.

On September 15, 2008, investment bank Lehman Brothers Holdings filed for Chapter 11 bankruptcy protection after suffering enormous losses during the subprime mortgage crisis, amplified by a widespread credit crunch that made it impossible for Lehman to finance itself. This cataclysmic event, the largest bankruptcy filing in U.S. history, played a major role in the unfolding of the global financial crisis.

The catastrophic drop in global financial markets from September 2008 to March 2009 is generally accepted as an extreme event or outlier compared with markets’ typical day-to-day movements. Yet such extreme events occur much more frequently and have much longer-term consequences than most investors realize. For instance, the 1997 Asian financial crisis and the 1998 Russian debt default both played roles in the collapse of hedge fund firm Long-Term Capital Management, which had to be bailed out in September 1998 for $3.6 billion by a consortium of 14 banks brought together by the New York Fed. Before that, a weak economy coupled with the 1990 oil price shock led to the early-1990s recession in the U.S. And the 1981–’82 recession, spurred by tight monetary policy designed to fight inflation, was one of the worst economic drawdowns in the U.S. since the Great Depression. In the financial world, we have witnessed such extreme events roughly every ten years (see Figure 1).

A decade has elapsed since 2008, and fears have reemerged that we may be on the verge of another meltdown, or at least a recessionary downturn. Now more than ever, investors, financial institutions and regulators need to think critically about the seemingly impossible task of quantifying, and perhaps even predicting, extreme financial events. A branch of statistics known as extreme value theory (EVT) provides them with a tool set for attempting to do just that.

#### Not-So-Normal Distributions

In the mathematical theory of finance, stock returns are generally believed to follow a normal distribution. The volatility of returns, as defined by the standard deviation, is regarded as the risk associated with stock returns. The main goal of modern portfolio theory is to maximize the risk-return ratio of the portfolio under realistic constraints. However, as Nassim Nicholas Taleb wrote in his well-known book *The Black Swan* about financial risk and uncertainty: “Almost everything studied about social life focuses on the ‘normal,’ particularly with ‘bell curve’ methods of inference that tell you close to nothing. Why? Because the bell curve ignores large deviations, cannot handle them, yet makes us confident that we have tamed uncertainty.”^{1}

Although stock returns are assumed to follow a normal distribution, there is ample evidence that stock return distributions have much fatter tails — that is, events at the extremes — than do normal distributions. As financialization has permeated every big asset class and capital markets have grown more global, the connectivity among asset classes and countries has increased exponentially. This has important implications with regard to the stability of the system as a whole. When these extreme events in the tail occur, they tend to correlate with other financial instruments and permeate the global system.
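A minimal simulation illustrates the point: even at identical volatility, a fat-tailed distribution produces four-standard-deviation days far more often than a normal one. The parameters below (1 percent daily volatility, Student-t returns with three degrees of freedom) are illustrative assumptions, not fitted to any market.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Normal daily returns vs. fat-tailed Student-t returns (3 degrees of
# freedom), both scaled to roughly 1 percent daily volatility.
normal = rng.normal(0.0, 0.01, n)
t3 = rng.standard_t(3, n) * 0.01 / np.sqrt(3.0)  # t(3) has variance 3

# Frequency of "extreme" days: moves beyond four standard deviations.
thresh = 4 * 0.01
print("normal:", np.mean(np.abs(normal) > thresh))
print("t(3):  ", np.mean(np.abs(t3) > thresh))
```

Under normality such days should essentially never happen over a trading lifetime; the fat-tailed series produces them orders of magnitude more often.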

Indeed, the credit, liquidity and equity crises of the recent past were triggered, or at least catalyzed, by what was perceived to be the weakest link in the financial system at the time. The subprime crisis was set off by defaults by overleveraged home equity and mortgage borrowers that then rippled through to broader credit assets. The Asian financial crisis was catalyzed by lopsided current-account positions and currency pegs, which provoked an emerging-market currency rout, specifically hitting Indonesia, Malaysia, South Korea and Thailand.

#### Extreme Value Theory

This “weakest link” property of the financial system is very similar to the fracture strength of many brittle materials, such as granite and wood. These materials have intrinsic defects that grow and tend to fracture under external stress. Experimentally, the probability that a brittle material will survive a given stress field obeys the cumulative distribution function of the standard Weibull distribution, one of the three fundamental distributions governing the extrema of large samples of independent random variables.

These distributions have been studied in extreme value theory. The birth of EVT can be traced to 1928, when British statistician Leonard Tippett was working at the British Cotton Industry Research Association. One of Tippett’s objectives was to make cotton thread stronger; his analysis showed that a thread’s strength was limited by the strength of its weakest fibers. With the help of statistician R.A. Fisher,^{2} Tippett was able to derive three asymptotic distributions for the extrema of independent and identically distributed random variables, which are captured in the equations and the graphs below:

1. The Fréchet distribution, which results from the maxima of distributions with polynomially decaying tails (known as fat tails). Its cumulative distribution function is of the form (α > 0):

F(x) = exp(−x^{−α}) for x > 0, and F(x) = 0 for x ≤ 0.

This distribution is often used to analyze maximum one-day rainfalls per year and other weather events.

2. The Weibull distribution, which results from the maxima of distributions with a finite cutoff. Its cumulative distribution function is of the form (α > 0):

F(x) = exp(−(−x)^{α}) for x < 0, and F(x) = 1 for x ≥ 0.

The distribution is well suited to survival analysis and failure modeling in reliability engineering, as in the fracture example discussed above.

3. The Gumbel distribution, which results from the maxima of distributions with exponentially decaying tails (or thin tails). Its cumulative distribution function is of the form

F(x) = exp(−e^{−x}).

Gumbel is useful in predicting the chances of an extreme earthquake, flood or other natural disaster taking place.
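For readers who want to experiment, all three limiting laws are available in SciPy (as `invweibull`, `weibull_max` and `gumbel_r`); this sketch checks the library CDFs against the closed forms above.

```python
import numpy as np
from scipy import stats

# The three classical extreme value laws in SciPy's parameterizations:
#   Frechet: F(x) = exp(-x^(-alpha)), x > 0   -> stats.invweibull(alpha)
#   Weibull: F(x) = exp(-(-x)^alpha), x < 0   -> stats.weibull_max(alpha)
#   Gumbel:  F(x) = exp(-exp(-x))             -> stats.gumbel_r()
alpha = 2.0
frechet = stats.invweibull(alpha)
weibull = stats.weibull_max(alpha)
gumbel = stats.gumbel_r()

# Verify each CDF against its closed form at a test point.
x = 1.5
assert np.isclose(frechet.cdf(x), np.exp(-x**-alpha))
assert np.isclose(weibull.cdf(-x), np.exp(-x**alpha))
assert np.isclose(gumbel.cdf(x), np.exp(-np.exp(-x)))
print("all three CDFs match their closed forms")
```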

#### EVT Study of Stock Returns

To check the assumption of normality in stock returns, we analyzed return distributions for popular stocks such as Facebook and Netflix. The evidence indicates that their distributions are far from normal. For example, Facebook’s zero weekly return peak is nearly twice that of the fitted normal distribution (see Figure 3). Even more intriguingly, following an earnings report in July 2013, Facebook’s stock rose 38.8 percent in one week, a move so drastic that the chance of it happening would be once in many thousands of years — if the weekly returns really followed a normal distribution. Similarly, in Netflix’s early days, its stock price plunged 83 percent in one week, which should have taken place only once in several million years if the assumption of normality were valid.
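To see where such "once in many thousands of years" figures come from, the sketch below computes the normal-model probability of a 38.8 percent weekly move; the 8 percent weekly volatility used here is a hypothetical stand-in, not Facebook's actual figure.

```python
from scipy import stats

# Illustrative only: suppose weekly returns were normal with zero mean and
# a hypothetical 8 percent weekly standard deviation (an assumed figure,
# not Facebook's actual volatility).
sigma = 0.08
move = 0.388  # the 38.8 percent weekly gain cited in the text

# One-sided probability of a weekly gain of at least 38.8 percent
# under the normal model (a move of move/sigma standard deviations).
p = stats.norm.sf(move / sigma)

# Implied recurrence interval, in years of 52 weeks.
years = 1 / (p * 52)
print(f"normal-model weekly probability: {p:.2e}")
print(f"implied recurrence: once in {years:,.0f} years")
```

Even modest changes to the assumed volatility swing the answer by orders of magnitude, which is itself a warning about how fragile normal-tail estimates are.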

This nonnormality of returns — not only in equities but in other asset classes, such as credit, interest rates, commodities and currencies — naturally lends itself to analyses using the EVT tool set of distribution of extremes. It’s noteworthy that a classical mathematical theory like EVT, which has existed for decades, can find widespread application in modern finance. By applying EVT to the return distribution of equities, we can analyze the distribution of maximal daily absolute returns within a week for selected stocks. The time series of daily returns is divided into consecutive blocks of five days each, and the maximum absolute return (considering both extreme gains and losses) is computed within each block. The distribution of block maxima is then fitted with the three extreme value distributions discussed above.
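The block maxima procedure can be sketched as follows, using simulated fat-tailed returns as a stand-in for real price data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Stand-in for a daily return series: fat-tailed Student-t returns.
# In a real study these would be loaded from market data.
daily_returns = rng.standard_t(3, 5 * 1000) * 0.02

# Block maxima: maximum absolute return within each five-day block.
blocks = np.abs(daily_returns).reshape(-1, 5)
block_maxima = blocks.max(axis=1)

# Fit the generalized extreme value (GEV) distribution, which nests all
# three limiting laws. In SciPy's convention, shape c < 0 corresponds to
# the Frechet (fat-tailed) case.
c, loc, scale = stats.genextreme.fit(block_maxima)
print(f"GEV shape c = {c:.3f} "
      f"({'Frechet (fat tail)' if c < 0 else 'Weibull/Gumbel'})")
```

Fitting the GEV family and inspecting the sign of the shape parameter is a standard way to let the data choose among the Fréchet, Weibull and Gumbel cases.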

Surprisingly, for all three stocks we studied — Amazon.com, Exxon Mobil Corp. and Vertex Pharmaceuticals — the extreme value distribution was well fitted by a Fréchet distribution, meaning that the underlying distribution of daily returns had a fat tail and deviated greatly from the normal distribution. These companies operate in very different industries. Most observers generally believe that there is more risk in the pharmaceuticals industry, represented by Vertex, than in the consumer services industry, represented by Amazon, and that consumer services or retail, in turn, is riskier than the oil and gas industry, represented by Exxon Mobil. The extreme distribution of stocks in these industries confirms these impressions: The tail of the Fréchet distribution grows increasingly fatter from Exxon Mobil to Amazon to Vertex.

#### Risk Measures under EVT

Because extreme value theory focuses on the tail of the probability distribution, it leads to unique statistics dedicated to the extrema of distributions. The fatness of the tail is directly incorporated into the form of extreme value distributions. The importance of tail-related statistics in the context of financial risk measurement makes the EVT tool set a sound framework for monitoring and regulating the risks of extreme events.

One of the most common risk measures designed in the spirit of EVT is value at risk (VaR), introduced in the early 1990s by J.P. Morgan & Co. VaR measures the maximum possible loss during a specific time period, excluding those extreme events whose combined probability is less than 1 − *p*, for a confidence level *p* (0 ≤ *p* ≤ 1). Formally, the VaR_{p} of a random variable *X* representing the loss is defined as

VaR_{p}(X) = inf{x : F_{X}(x) ≥ p},

where F_{X} is the cumulative distribution function of *X* — that is, VaR_{p} is the *p*-quantile of the loss distribution.

VaR says nothing, however, about how severe the loss may be once an extreme event does occur. The conditional value at risk (CVaR), or expected shortfall, is defined as the expected loss given that the loss exceeds the VaR threshold:

CVaR_{p}(X) = E[X | X ≥ VaR_{p}(X)].

In Figure 5, assuming *p* equals 95 percent, the one-day 95 percent VaR is the maximum possible daily loss if we exclude the worst 5 percent of days, and CVaR is the average loss encountered on those worst 5 percent of days. As you can see in Figure 5, CVaR provides more information than VaR on the tail distribution — that is, the distribution of extreme events. It is a coherent risk measure and has the advantage of being convex, making the optimization problem easier to solve.^{3}
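In practice, historical VaR and CVaR can be estimated directly from a return series as an empirical quantile and a tail average. A sketch on simulated returns (the Student-t series is a stand-in for a real portfolio's return history):

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in daily return series (losses are negative); real data would
# come from a portfolio's return history.
returns = rng.standard_t(4, 10_000) * 0.01

p = 0.95  # confidence level

# Historical VaR: the loss threshold exceeded only on the worst 5% of days.
var = -np.quantile(returns, 1 - p)

# Historical CVaR (expected shortfall): the average loss on those days.
cvar = -returns[returns <= -var].mean()

print(f"1-day 95% VaR:  {var:.4f}")
print(f"1-day 95% CVaR: {cvar:.4f}")
```

By construction the CVaR estimate is always at least as large as the VaR estimate, since it averages losses beyond the VaR threshold.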

The focus of EVT on distributions of extrema provides powerful tools for estimating risk measures such as VaR and CVaR. For example, apart from the extreme value distributions introduced above, a central distribution in EVT is the generalized Pareto distribution, which is used to model the tails of a variety of distributions. The standard cumulative distribution function of a generalized Pareto distribution is

F(x) = 1 − (1 + ζx)^{−1/ζ} for ζ ≠ 0, and F(x) = 1 − e^{−x} for ζ = 0.

The parameter ζ characterizes the fatness of the tail: The larger ζ, the fatter the tail.^{4,5}

For a large class of underlying distribution functions, CVaR can be expressed in a straightforward manner from the generalized Pareto distribution (see Figure 6). Inspired readers can refer to the plethora of research articles on this topic.^{6,7}
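A sketch of this peaks-over-threshold approach: fit a generalized Pareto distribution to the losses exceeding a high quantile, again on simulated fat-tailed data rather than a real loss history.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Stand-in daily loss series (positive numbers are losses); real data
# would be the negated return series of a portfolio.
losses = np.abs(rng.standard_t(3, 20_000)) * 0.01

# Peaks over threshold: model exceedances above a high threshold u
# with a generalized Pareto distribution (GPD).
u = np.quantile(losses, 0.95)
exceedances = losses[losses > u] - u

# Fix the location at 0, matching the standard GPD form; the fitted
# shape (zeta in the text) measures the fatness of the tail.
zeta, _, scale = stats.genpareto.fit(exceedances, floc=0)
print(f"GPD shape zeta = {zeta:.3f}, scale = {scale:.4f}")
```

A fitted shape parameter well above zero signals a fat tail, and the fitted GPD can then be plugged into the closed-form VaR and CVaR expressions referenced above.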

#### EVT and Regulation

Extreme value theory establishes a comprehensive framework for analyzing a variety of rare events, spanning fields such as risk measurement, reliability engineering, failure forecasting and survival analysis. In financial risk management, EVT plays a pivotal role in two intuitive frameworks for modeling the distribution of extremes and the distribution of exceedances (the frequency of exceeding some critical value). The resulting risk measures, VaR and CVaR, both grounded in EVT, have gained widespread acceptance in the financial and regulatory communities.

The systemic risk that financial regulators manage naturally lends itself to the weakest link paradigm in EVT, making the EVT extreme event tool set an intuitive framework for monitoring and regulating risks. The evolution of regulatory frameworks in the context of EVT dates back to the credit and currency crises of the 1990s, when the Bank for International Settlements began to revise the Basel I framework to address the three pillars of risk: credit risk, market risk and operational risk. The subsequent Basel II regulations, adopted in June 2004, dictated the amount of capital banks had to maintain, based on the credit risk associated with particular asset classes.^{8} This risk was specified using VaR projections based on historical data, resulting in VaR becoming a well-entrenched EVT tool for regulation.

Though Basel II was able to capture the risks faced by an individual financial institution, it had limited ability to anticipate the systemic risk and contagion that were prominent features of the global financial system during and after the 2008-’09 crisis. Regulators recognized the shortcomings of Basel II in addressing capital adequacy across the entire banking system and the resulting liquidity crises, unanticipated correlations and simultaneous failure of too-big-to-fail institutions, which magnified damaging effects.

A second revised regulatory regime, Basel III, was developed to account for additional capital stored outside individual institutions and to assure capital adequacy under extreme conditions affecting the broader banking sector.^{9,10,11} EVT-based measures like CVaR that measure the average losses in a tail contributed heavily to the understanding of the capital requirements under extreme market conditions. Basel III changes instituted an increase in the minimum tier 1 capital requirement from 4 percent to 6 percent; added a capital buffer of 2.5 percent and a countercyclical buffer as high as 2.5 percent, depending on the national economy; raised the liquidity coverage sufficient to provide one month of survival in a stress scenario; created a net stable funding ratio to encourage efforts toward longer-term resiliency; and mandated a supplemental 3 percent non-risk-based leverage ratio as a backstop to tier 1 capital. EVT-based stress-test analyses provided the quantitative underpinnings necessary for these changes.

Basel III went further than Basel II in limiting the effect of weaker banks on the broader banking sector, but it still falls short of a comprehensive top-down view on systemic risk related to counterparty credit and derivatives, though this was partially addressed by the Dodd–Frank package of regulatory reforms (including the Volcker rule, which prohibits banks from conducting certain speculative investments for their own accounts) passed by the U.S. Congress in 2010.^{12} Further applications of EVT that expand its scope to better quantify this comprehensive top-down systemic risk will likely help shape the regulatory framework in the future.

***Michael Kozlov** is Senior Executive Research Director at WorldQuant, LLC, and has a Ph.D. in theoretical particle physics from Tel Aviv University.*

***Andrew Shen** is a Research Intern at WorldQuant, LLC, and a Ph.D. candidate in physics at MIT.*

***Elton Zhu** is a Research Intern at WorldQuant, LLC, and a Ph.D. candidate in physics at MIT, specializing in quantum information theory.*

**ENDNOTES**

1. Nassim Nicholas Taleb. *The Black Swan: The Impact of the Highly Improbable*. New York: Random House, 2007.

2. R.A. Fisher and Leonard H.C. Tippett. “Limiting Forms of the Frequency Distribution of the Largest or Smallest Member of a Sample.” *Mathematical Proceedings of the Cambridge Philosophical Society* 24, no. 2 (1928): 180-190.

3. R. Tyrrell Rockafellar and Stanislav Uryasev. “Optimization of Conditional Value-at-Risk.” *Journal of Risk* 2, no. 3 (2000): 21-42.

4. A.A. Balkema and L. de Haan. “Residual Life Time at Great Age.” *Annals of Probability* 2, no. 5 (1974): 792-804.

5. James Pickands III. “Statistical Inference Using Extreme Value Order Statistics.” *Annals of Statistics* 3, no. 1 (1975): 119-131.

6. Laurens de Haan and Ana F. Ferreira. *Extreme Value Theory: An Introduction*. New York: Springer, 2007.

7. François Longin. *Extreme Events in Finance: A Handbook of Extreme Value Theory and Its Applications*. Hoboken, NJ: John Wiley & Sons, 2016.

8. Bank for International Settlements (BIS). “Basel II: International Convergence of Capital Measurement and Capital Standards: A Revised Framework” (2004).

9. International Monetary Fund Legal Department. “Restoring Financial Stability — The Legal Response.” *Current Developments in Monetary and Financial Law* 6 (2013).

10. Jeffery Atik. “Basel II and Extreme Risk Analysis.” Loyola-LA Legal Studies Paper No. 2010-40 (2010).

11. BIS. “Basel III: A Global Regulatory Framework for More Resilient Banks and Banking Systems” (2011).

12. Jack Foster. “Changes in U.S. Banking Regulation — Tier 1 Capital Requirements.” New York Institute of Finance.

*Thought Leadership articles are prepared by and are the property of WorldQuant, LLC, and are being made available for informational and educational purposes only. This article is not intended to relate to any specific investment strategy or product, nor does this article constitute investment advice or convey an offer to sell, or the solicitation of an offer to buy, any securities or other financial products. In addition, the information contained in any article is not intended to provide, and should not be relied upon for, investment, accounting, legal or tax advice. WorldQuant makes no warranties or representations, express or implied, regarding the accuracy or adequacy of any information, and you accept all risks in relying on such information. The views expressed herein are solely those of WorldQuant as of the date of this article and are subject to change without notice. No assurances can be given that any aims, assumptions, expectations and/or goals described in this article will be realized or that the activities described in the article did or will continue at all or in the same manner as they were conducted during the period covered by this article. WorldQuant does not undertake to advise you of any changes in the views expressed herein. WorldQuant and its affiliates are involved in a wide range of securities trading and investment activities, and may have a significant financial interest in one or more securities or financial products discussed in the articles.*