2015 Symposium

On October 16, 2015, data scientists, statisticians, and industry practitioners gathered at UC Berkeley’s California Memorial Stadium for CDAR’s inaugural symposium. The program featured three speakers, each presenting their work and leading a question-and-answer session, along with three panel discussions. Thank you to our attendees and participants for making CDAR’s first symposium a great success! A summary of the day’s program is included below, and a PDF of the program can be downloaded here.

Opening remarks were given by Frances Hellman, Dean of Mathematical and Physical Sciences, and Carla Hesse, Executive Dean for the College of Letters & Science, both from UC Berkeley.

The day’s first panel featured CDAR’s Co-Director Lisa Goldberg and State Street’s Jessica Donohue.
Abstract: CDAR applies new technologies to the most important problems in financial economics. Today’s program is a first step toward accomplishing CDAR’s mission, and we invite the audience to join us as we work toward our goals.

Presentations by:

Sanjiv Das, William and Janice Terry Professor of Finance at Santa Clara University’s Leavey School of Business
Modeling Systemic Risks Using Networks
Abstract: A review of network metrics, systemic risk models, and recent work on network models of systemic risk, including an application to real-time risk network monitoring. I will present a new systemic risk score based on individual bank risk and interconnectedness across institutions. Within this framework, I will define risk contributions from each entity, risk increments, system fragility, and entity criticality. The measure is robust to spillover risk, and splitting up too-big-to-fail banks may not mitigate systemic risk.
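The flavor of such a network-based score can be illustrated with a small sketch. The quadratic-form score and Euler decomposition below are an illustrative stand-in, not necessarily the exact metric presented in the talk; the adjacency matrix and risk scores are hypothetical.

```python
import math

# Illustrative systemic risk score in the spirit of the talk (hypothetical form):
# S = sqrt(c' A c), where c_i is a standalone risk score for bank i and
# A is the adjacency matrix of interbank connections (A_ii = 1).

def systemic_score(c, A):
    # quadratic form c' A c, then square root
    n = len(c)
    q = sum(c[i] * A[i][j] * c[j] for i in range(n) for j in range(n))
    return math.sqrt(q)

def risk_contributions(c, A):
    # S is homogeneous of degree 1 in c, so by Euler's theorem
    # S = sum_i c_i * dS/dc_i; each term is bank i's risk contribution.
    S = systemic_score(c, A)
    n = len(c)
    grads = [sum((A[i][j] + A[j][i]) * c[j] for j in range(n)) / (2 * S)
             for i in range(n)]
    return [c[i] * grads[i] for i in range(n)]

A = [[1, 1, 0],
     [1, 1, 1],
     [0, 1, 1]]         # bank 2 is connected to both others
c = [0.5, 0.8, 0.3]     # hypothetical standalone risk scores

S = systemic_score(c, A)
contribs = risk_contributions(c, A)
assert abs(sum(contribs) - S) < 1e-9   # contributions sum to the total score
```

Because the contributions add up to the total score, one can ask how much system risk each institution accounts for, which is what makes questions like the effect of splitting a too-big-to-fail bank tractable in this setting.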

Kay Giesecke, Associate Professor of Management Science & Engineering at Stanford
Deep Learning for Mortgage Risk
Abstract: An unprecedented number of mortgage defaults in 2007 precipitated one of the greatest financial crises in recent memory. We propose deep neural network models for mortgage delinquency and prepayment, which capture loan-to-loan correlation due to geographic proximity and exposure to common risk factors. Using data on 120 million prime and subprime mortgages originated across the US between 1999 and 2014, we show that the model provides accurate multi-period forecasts of loan- and pool-level risk.
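The basic shape of such a model can be sketched in a few lines. This is a minimal toy forward pass, not the authors’ model: the feature names (FICO, loan-to-value, rate incentive, local unemployment) and network sizes are hypothetical, chosen to show how loan-level and local covariates feed a network that outputs probabilities over next-period loan states.

```python
import math
import random

random.seed(0)

STATES = ["current", "delinquent", "prepaid"]

def relu(x):
    return [max(0.0, v) for v in x]

def linear(W, b, x):
    # affine map: W x + b
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def init(rows, cols):
    # random untrained weights, for illustration only
    return [[random.uniform(-0.5, 0.5) for _ in range(cols)]
            for _ in range(rows)]

# Hypothetical loan features: scaled FICO, loan-to-value,
# note rate minus market rate, and local unemployment rate
# (the last one is how geographic covariates enter as shared inputs).
x = [0.72, 0.80, 0.015, 0.062]

W1, b1 = init(8, 4), [0.0] * 8   # hidden layer, 8 units
W2, b2 = init(3, 8), [0.0] * 3   # output layer, one logit per state

h = relu(linear(W1, b1, x))
p = softmax(linear(W2, b2, h))

for state, prob in zip(STATES, p):
    print(f"P(next state = {state}) = {prob:.3f}")
```

A real model of this kind would be trained on millions of loan-month observations; the point here is only the architecture, with a softmax output giving a proper probability distribution over transitions.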

Peter Bartlett, Professor in Computer Science and Statistics at UC Berkeley
Prediction and Sequential Decision Problems in Adversarial Environments
Abstract: In many decision problems, it is useful to model the process generating the data as an adversary with whom the decision method competes. Even decision problems that are not inherently adversarial can be usefully modeled in this way, since the assumptions are sufficiently weak that effective prediction strategies for adversarial settings are very widely applicable. This talk will review some recent advances in analysis and methods for online decision problems of this kind, and some implications for allocation, prediction, and option pricing.
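A canonical example of the kind of adversarial online decision method the abstract refers to is the exponentially weighted average forecaster (Hedge). The sketch below is that standard textbook algorithm, not a claim about the specific methods of the talk; the loss sequence is randomly generated for illustration, though the regret bound holds even against an adversary.

```python
import math
import random

def hedge(losses, eta):
    """Exponentially weighted average forecaster.

    losses: T x N matrix of per-round expert losses in [0, 1].
    Returns the cumulative loss of the weighted-mixture strategy.
    """
    N = len(losses[0])
    w = [1.0] * N
    total = 0.0
    for round_losses in losses:
        s = sum(w)
        p = [wi / s for wi in w]                     # play the weighted mixture
        total += sum(pi * li for pi, li in zip(p, round_losses))
        # downweight experts in proportion to the loss they just incurred
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, round_losses)]
    return total

random.seed(1)
T, N = 200, 5
losses = [[random.random() for _ in range(N)] for _ in range(T)]

eta = math.sqrt(8 * math.log(N) / T)   # standard tuning for horizon T
alg_loss = hedge(losses, eta)
best = min(sum(losses[t][i] for t in range(T)) for i in range(N))

# With this eta, regret is at most sqrt(T * ln(N) / 2) for ANY loss sequence.
bound = math.sqrt(T * math.log(N) / 2)
print(f"regret = {alg_loss - best:.2f} <= bound = {bound:.2f}")
```

The key point, echoing the abstract, is that the guarantee makes no probabilistic assumption about how the losses are generated, which is why such strategies transfer so widely to allocation, prediction, and pricing problems.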

Panel Discussions:

The Intersection of Data Science and Risk Analytics featured Laurent El Ghaoui and Saul Perlmutter, both from UC Berkeley, with CDAR’s Co-Director Bob Anderson sitting as moderator and panelist.
Abstract: An astrophysicist, a computer scientist working on financial data, and a mathematical economist discuss data science questions from the perspective of their disciplines.

Statistical Implications of Big Data Applied to Risk Modeling featured Ben Davis of Citadel and Philip Stark of UC Berkeley, with State Street’s Jeff Bohn sitting as moderator and panelist.
Abstract: Risk modeling focuses primarily on understanding the drivers behind unexpected losses affecting some particular objective, such as meeting future liabilities or maximizing return on capital. More data are now available than ever before to understand distributions of future asset values. Increased computational capacity can now facilitate non-parametric and simulation-based analytical approaches in a way not available until recently. Machine-learning algorithms may lead to better ways to condition risk models on current market and macroeconomic regimes. What are the statistical implications of applying big-data tools to risk modeling?