Putting Risk Models to the Test
Steve Wiggins, WG’98, Moody’s Analytics
Steve Wiggins, WG’98, builds, stress-tests and sells custom-made models for hedge optimization, tail risk, high yield, risk-based pricing, portfolio economic capital and other Whartonesque works of creative minds. Steve explains the benefits of models, what drives risk management today, and how he helps firms.
What is the value of risk analysis models?
Models exist as a parallel universe to fundamental analysis. Think of an asset manager who picks stocks and bonds to put in a portfolio. As a fundamental analyst, he will look at industry factors, the management team, company history and the balance sheet, and he will use this analysis to make a recommendation. Models perform the same type of function, but they use different criteria that are uncorrelated with the factors the analyst would use. A model can be used to rank the attractiveness or risk of potential investments. This gives you the benefit of being able to look at your investable universe using two different systems. I have more confidence in making a selection if both systems produce positive signals.
The other goal of a model is to be objective. Take three investments, each in a different sector, covered by a different analyst using the traditional five C’s of credit approach. Each analyst will have a slightly different methodology for evaluating the attractiveness of the investment being looked at. When an investment committee comes together to decide if A is better than B is better than C, they are not comparing apples to apples in as pure a way as they would if they were using a model.
What are the drivers of how financial institutions think about risk today?
The Dodd-Frank Act and policy changes at the Federal Reserve are two key drivers. The motivation is to avoid a repeat of the 2008 crisis by making sure banks and insurance companies are, and will remain, well-capitalized even under adverse conditions. One requirement that grew out of the Dodd-Frank Act is the Comprehensive Capital Analysis and Review (CCAR) stress-testing program. It mandates that 19, and soon more, banks in the United States annually prepare a set of in-depth reports for the Federal Reserve on what is in their portfolios and how those portfolios will perform over the next several years under adverse conditions. In the most recent submission, four of the banks did not pass, which highlighted gaps banks tended to have in understanding the risk in their own portfolios: a lack of data and a lack of bottom-up modeling capabilities. The Federal Reserve wants banks to be able to forecast loss estimates for every asset on their books, not just apply generic top-down assumptions about the loss in a corporate loan portfolio.
What is Moody’s Analytics’ role?
Imagine a bank coming to Moody’s and saying, “We lend to companies in the Southeastern United States. Even though we’re a big bank, we don’t have that many loan defaults in our portfolio history, so it’s difficult to predict defaults when we don’t have a large sample of default data.” Moody’s Analytics has data from other banks to augment the client bank’s own history, which provides a sufficiently large sample. The second piece is to help the bank build a more robust default model.
Can you describe some specific bottom-up models?
Probability of default, loss given default and exposure at default models: All three of these models are needed to derive an expected loss calculation, which lets the bank know how much provision to set aside for losses. That translates to how much capital it needs to have to maintain the minimum capital ratios that the regulator specifies.
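As a back-of-the-envelope illustration (my own, not from the interview), the three quantities combine multiplicatively into an expected loss for each loan; the numbers below are hypothetical:

```python
# Hypothetical single-loan expected loss: EL = PD x LGD x EAD.
pd_1yr = 0.02          # probability of default over one year (2%)
lgd = 0.45             # loss given default (45% of the exposure is lost)
ead = 10_000_000       # exposure at default, in dollars

expected_loss = pd_1yr * lgd * ead
print(f"Expected loss: ${expected_loss:,.0f}")   # $90,000
```

Summing this figure across every loan gives the portfolio-level provision the interview refers to.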
Meeting those minimum ratios may mean the bank must raise more equity to boost its capital ratio, or sell assets to bring its risk down, given the amount of capital it has. Financial institutions that hold fixed income portfolios need to understand how likely they are to realize a significant decline in the value of their holdings, and how severe that decline is likely to be. Economic capital models attempt to do this by describing a probability distribution of potential future values of the portfolio; it is then the institution’s decision what confidence level it is comfortable with, such as 95%, 99% or 99.9%. The higher the confidence level, the more capital the institution has to set aside to ensure that it remains solvent in the event that it experiences sizable losses in the portfolio.
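To make the confidence-level idea concrete, here is a minimal Monte Carlo sketch with made-up portfolio parameters (again my own illustration): simulate a distribution of one-year portfolio losses and read off the loss at each confidence level; capital held above the expected loss covers that tail.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical portfolio: 1,000 identical loans of $1M each, 2% annual
# default probability, 45% loss given default, defaults simulated
# independently for simplicity.
n_loans, exposure, p_default, lgd = 1_000, 1_000_000, 0.02, 0.45
n_scenarios = 100_000

# Number of defaults in each simulated year, then the dollar loss.
defaults = rng.binomial(n_loans, p_default, size=n_scenarios)
losses = defaults * exposure * lgd

expected_loss = losses.mean()
for conf in (0.95, 0.99, 0.999):
    tail_loss = np.quantile(losses, conf)
    print(f"{conf:.1%} confidence: simulated loss {tail_loss:,.0f}, "
          f"capital above expected loss {tail_loss - expected_loss:,.0f}")
```

A real economic capital model would also capture correlation between borrowers, which fattens the tail; the independence assumption here understates the capital requirement.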
What is your history at Moody’s Analytics?
In 2001, three years after leaving Wharton, I joined KMV, a small but highly regarded risk management firm. Allen Levinson [also interviewed in this newsletter] hired me into KMV and became my first boss there. A year later, KMV was purchased by Moody’s, the first of many acquisitions it made to build up its risk analytics and reporting capabilities. Each of the acquisitions was a niche player, including Economy.com, run by Mark Zandi, a renowned macroeconomist; Wall Street Analytics, which does modeling for structured finance; and Fermat International, a French enterprise data warehouse company that provides data and technology infrastructure for Moody’s projects.
We managed the acquisitions as a portfolio, but each firm maintained its own brand and was allowed to continue what it had been doing. In 2008, Moody’s decided to amalgamate these companies under a common brand, Moody’s Analytics, which became the second primary line of business alongside Moody’s Investors Service, the rating agency that everyone in the capital markets knows, with a history of more than 100 years.
How do you make sure that your models perform?
We spend a lot of time working to improve the accuracy and breadth of our models. On our side, we perform regular validation to recalibrate the models as we get new data from the economic environment. Additionally, clients back-test our models. We might go to a hedge fund and say, “We think you can use our model to make better buy and sell decisions.” They’ll say, “Prove it!” We’ll say to the fund, “How about you run an exercise where you look at your actual portfolio’s return, and then go back and say, ‘If I had had this model a year ago, it would have given me these signals: that these other bonds, for example, were attractive, and that I should have bought them instead of the ones I actually did buy.’ Then hypothetically buy those and see what the performance would have been over the last year. If that performance was better, then the model is shown to have value.” That is the ultimate proof. If you can show them that you could have made their trades better, then they are sold. We’re happy to put ourselves to the test.
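The exercise he describes can be sketched in a few lines. This is a simplified illustration of my own, assuming the fund has last year’s actual weights, the weights the model’s signals would have recommended, and each bond’s realized return over that year:

```python
# Hypothetical back-test: compare the return of the portfolio actually held
# with the portfolio the model's signals would have recommended a year ago.
actual_weights = {"bond_A": 0.50, "bond_B": 0.30, "bond_C": 0.20}
model_weights  = {"bond_A": 0.20, "bond_B": 0.30, "bond_D": 0.50}

realized_return = {"bond_A": -0.04, "bond_B": 0.03, "bond_C": 0.01,
                   "bond_D": 0.06}   # made-up one-year returns

def portfolio_return(weights):
    return sum(w * realized_return[name] for name, w in weights.items())

actual = portfolio_return(actual_weights)
hypothetical = portfolio_return(model_weights)
print(f"actual {actual:+.2%} vs. model-driven {hypothetical:+.2%}")
# If the model-driven figure is consistently higher across many periods,
# the signals added value.
```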
What makes your work interesting or rewarding?
The landscape is constantly changing. The regulatory environment is shifting, modeling techniques evolve, and the market environment surprises you. Pre-empting or responding to these challenges is refreshing. One day I will be talking to a large commercial bank about economic capital or its stress-testing process, and the next day I’ll be speaking to a life insurance company about how to improve its risk-adjusted portfolio returns or how to adopt a more risk-informed limits system. I’ll speak to a hedge fund about using relative value analytics to generate higher alpha in its portfolios. Getting to speak with and learn from all sorts of people, from chief risk officers to traders, quants and portfolio managers, is very exciting, and I am constantly learning, which keeps me engaged. I also have the luxury of working with an exceptionally bright, talented, enthusiastic and intellectually curious group of people.