All seminars are held in 1011 Evans Hall at UC Berkeley, unless otherwise notified.
- In insurance, underwriting performance is a function of exposures, losses relative to exposures, and premiums relative to exposures. Getting losses and loss trends right (the insurance analogue of cost of goods sold) is critically important: a small estimation mistake typically has a large impact on the bottom line.
- Swiss Re is determining loss relevant trends using advanced analytics, often in collaboration with universities, government organizations, NGOs, rating agencies, consultants, investment management firms, lawyers, and others. Findings are used for both capital allocation and experience-based costing analyses.
- In situations where the past is a poor predictor of the future, exposure-based rating analyses using forward-looking models are superior to the traditional experience-based approach. Swiss Re's proprietary forward-looking models are routinely used in costing.
- The Swiss Re Institute professionalizes Swiss Re's R&D to improve its competitive advantage in risk selection and capital allocation in line with Swiss Re's strategic priorities.
Abstract: The talk will center on a set of recent results on the analysis of Google’s PageRank algorithm on directed complex networks. In particular, it will focus on the so-called power-law hypothesis, which states that the distribution of the ranks produced by PageRank on a scale-free graph (whose in-degree distribution follows a power law) also follows a power law with the same tail index as the in-degree. We show that the distribution of PageRank on both the directed configuration model and the inhomogeneous random digraph does indeed follow a power law whenever the in-degree does, and we provide explicit asymptotic limits for it. Moreover, our asymptotic expressions exhibit qualitatively different behaviors depending on the level of dependence between the in-degree and out-degree of each vertex. On graphs where the in-degree and out-degree are close to independent, our main theorem predicts that PageRank will tend to grant high ranks to vertices with large in-degrees, but also to vertices that have highly ranked inbound neighbors. However, when the in-degree and out-degree are positively correlated, the latter effect can disappear, strengthening the impact of high-degree vertices on the ranks produced by the algorithm.
Download the slides from this presentation: Olvera_PageRank
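The rank recursion behind these results can be illustrated with a minimal power-iteration sketch. This is a toy example of my own; the graph, damping factor, and iteration count are arbitrary choices, not taken from the talk.

```python
# Minimal PageRank power iteration on a small directed graph.
# Illustrative only: node labels, edges, and damping=0.85 are arbitrary.

def pagerank(out_links, damping=0.85, iters=100):
    nodes = list(out_links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        # Every vertex keeps a baseline (1 - damping) / n of rank mass.
        new = {v: (1.0 - damping) / n for v in nodes}
        for u, targets in out_links.items():
            if targets:
                # A vertex splits its rank evenly among its out-neighbors.
                share = damping * rank[u] / len(targets)
                for v in targets:
                    new[v] += share
            else:
                # Dangling vertex: redistribute its rank uniformly.
                for v in nodes:
                    new[v] += damping * rank[u] / n
        rank = new
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
ranks = pagerank(graph)
# "c" has the largest in-degree and ends up with the highest rank;
# "a" also ranks high because its single inbound neighbor ("c") is
# highly ranked -- the second effect discussed in the abstract.
```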
This paper deals with the approximation of latent statistical factors with sparse, easy-to-interpret proximate factors. Latent factors in a large-dimensional factor model can be estimated by principal component analysis but are usually hard to interpret. By shrinking the factor weights, we obtain proximate factors that are easier to interpret. We show that proximate factors consisting of the 5-10% of cross-sectional observations with the largest exposures are usually sufficient to almost perfectly replicate the population factors, even when these do not have a sparse structure. We derive an asymptotic lower bound for the correlation and generalized correlations of proximate factors with the population factors, providing guidance on how to construct the proximate factors. Simulations and empirical applications to financial single- and double-sorted portfolios illustrate that proximate factors provide an excellent approximation to latent factors while being interpretable.
Download the slides from this presentation: Interpretable Factor Models
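The core idea can be sketched in a few lines: estimate a factor by PCA, then zero out all but the largest-exposure names. The simulated data, the 10% cutoff, and the single-factor setup below are my own illustrative choices, not the authors' estimator.

```python
# Hedged sketch of a proximate factor: keep only the 10% of names with the
# largest absolute PCA weights and check how well the sparse factor tracks
# the dense one. All data are simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_assets, n_periods = 200, 500
factor = rng.standard_normal(n_periods)            # latent population factor
beta = rng.standard_normal(n_assets)               # (non-sparse) exposures
returns = np.outer(factor, beta) + 0.5 * rng.standard_normal((n_periods, n_assets))

# PCA factor weights: first right singular vector of the return panel.
_, _, vt = np.linalg.svd(returns, full_matrices=False)
weights = vt[0]

# Proximate factor: zero out all but the k largest absolute weights.
k = n_assets // 10
keep = np.argsort(np.abs(weights))[-k:]
sparse = np.zeros_like(weights)
sparse[keep] = weights[keep]

pc_factor = returns @ weights                      # dense PCA factor
prox_factor = returns @ sparse                     # sparse proximate factor
corr = abs(np.corrcoef(pc_factor, prox_factor)[0, 1])
# corr is close to 1: a handful of large-exposure names nearly replicate
# the dense factor, mirroring the paper's headline finding.
```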
Abstract: As technology continues to insinuate itself into all facets of financial services, the insurance industry faces a slow-motion parade of promise, possibilities, prematurity, and pared-down expectations. Digitization, the birth of InsurTech, machine intelligence, and the collection & curation of (orders of magnitude) more structured & unstructured data are changing (and will continue to change) the industry in material ways—not always in line with predictions. This presentation describes (from a large insurer’s perspective) trends and challenges related to how technology and society’s digitization are irrevocably changing risk markets and insurance. Based on the described trends, one nuanced answer will be suggested to the question of whether insurance is being disrupted or transformed.
Download the slides from this presentation: Bohn_DigitallyDrivenChangesInsurance_2018_v3.1
Estimating a robust risk model for a portfolio that spans multiple asset classes is a challenging task due to the “curse of dimensionality” (i.e., the problem of estimating too many relationships from too few observations). While the sample covariance matrix is easily computed, it is susceptible to capturing spurious relationships that make it unsuitable for portfolio construction purposes. In this talk, we present a new approach for constructing risk models that span multiple asset classes. We also discuss the implications for portfolio risk management and portfolio construction.
Download the slides from this presentation: New MAC2 Slide Deck
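The curse of dimensionality mentioned above is easy to demonstrate: with more assets than observations, the sample covariance matrix is singular. A simple shrinkage toward its diagonal (my own generic illustration, not the speaker's method) restores invertibility:

```python
# Sketch: sample covariance is rank-deficient when assets outnumber
# observations; linear shrinkage toward the diagonal fixes that.
# Dimensions and the shrinkage intensity are arbitrary choices.
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_assets = 60, 100                  # fewer observations than assets
returns = rng.standard_normal((n_obs, n_assets))

sample = np.cov(returns, rowvar=False)     # rank at most n_obs - 1 = 59
target = np.diag(np.diag(sample))          # structured target: variances only
delta = 0.5                                # shrinkage intensity (free choice)
shrunk = (1 - delta) * sample + delta * target

rank_sample = np.linalg.matrix_rank(sample)   # < n_assets: not invertible
rank_shrunk = np.linalg.matrix_rank(shrunk)   # full rank: usable in optimization
```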
We conduct a comprehensive analysis of the sequential introductions of dynamic and static volatility interruption (VI) in the Korean stock markets. The Korea Exchange introduced VIs to improve price formation, and to limit damage to investors from brief periods of abnormal volatility, for individual stocks. We find that dynamic VI is effective in stabilizing markets and aiding price discovery, while the effect of static VI is limited. The static VI functions similarly to the pre-existing price-limit system; this accounts for its limited incremental benefit.
Read the paper this talk is based on: Kwon-Eom-La-Park-VIs-March-1-2018
Download the slides from this presentation: 2018_March_1_VI_Eom_Risk Seminar_Revised
We develop a methodology to estimate dynamic factor loadings using cross-sectional risk characteristics, which is especially useful when factor loadings vary significantly over time. In comparison, standard regression approaches assume the factor loadings are constant over a particular window. Applying the methodology to a dataset of U.S.-domiciled mutual funds, we distinguish the components of active returns attributable to (1) constant factor exposures, for example, a tilt to value stocks; (2) time-varying factor exposures; and (3) security selection. We find that large-cap growth funds tend to be concentrated in two factors, momentum and quality, whereas large-cap blend funds have the most factor diversity. With our approach, we find that common measures of manager skill may be misleading. For example, we find no evidence that active share is associated with larger active returns; rather, the opposite is true across the whole sample when controlling for factors such as fund size and fees. We also examine factor crowding in common strategies.
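The contrast with constant-loading regressions can be sketched as follows: when loadings are proxied by observable characteristics, each period's factor returns can be recovered by a cross-sectional regression. The simulated panel and OLS setup below are my own illustration, not the authors' estimator.

```python
# Sketch: period-by-period cross-sectional OLS of returns on that period's
# characteristics recovers factor returns even when loadings move over time.
# All inputs are simulated; dimensions and noise level are arbitrary.
import numpy as np

rng = np.random.default_rng(4)
n_assets, n_periods, n_factors = 100, 120, 3
char = rng.standard_normal((n_periods, n_assets, n_factors))   # loadings B_t
f_true = rng.standard_normal((n_periods, n_factors))           # factor returns
returns = np.einsum("tik,tk->ti", char, f_true) \
    + 0.1 * rng.standard_normal((n_periods, n_assets))

f_hat = np.empty((n_periods, n_factors))
for t in range(n_periods):
    # Cross-sectional regression at date t: r_t = B_t f_t + noise.
    f_hat[t], *_ = np.linalg.lstsq(char[t], returns[t], rcond=None)
# f_hat tracks f_true closely because B_t is observed each period,
# with no constant-loading window assumption needed.
```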
The jump threshold perspective is a view of credit risk in which the event of default corresponds to the first time a stock's log price experiences a downward jump exceeding a certain threshold size. We will describe and motivate this perspective and show that we may obtain explicit formulas for default probabilities and credit default swaps, even when the stock has stochastic volatility, the interest rate is stochastic, and the default threshold is a non-constant stochastic process. This talk is based on joint work with Pierre Garreau and Chun-Yuan Chiu.
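In symbols (notation chosen here for illustration, not taken from the talk), the default time under the jump-threshold view is the first time the log price jumps down by more than a fixed threshold:

```latex
% X_t = \log S_t is the log stock price, \Delta X_t = X_t - X_{t-} its jump
% at time t, and a > 0 the (possibly stochastic) default threshold.
\tau = \inf\{\, t > 0 : \Delta X_t \le -a \,\}
```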
This talk will explore the cost of implicit leverage associated with an S&P 500 Index futures contract and derive an implied financing rate. While this implicit financing rate was often attractive relative to market rates on explicit financings, the relationship between the implicit and explicit financing rates was volatile and varied considerably based on legal and economic regimes. Among other findings, regulatory reform in 2000 appeared to reduce significantly the spreads between this implicit financing rate and contemporaneous Eurodollar and US Treasury rates.
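Backing the implied financing rate out of a futures price can be sketched with the standard cost-of-carry relation F = S·exp((r − q)·T). The numbers below are made up for illustration and are not data from the talk.

```python
# Sketch: solve the cost-of-carry relation F = S * exp((r - q) * T) for r,
# the financing rate implicit in an index futures price.
# All inputs are hypothetical round numbers.
import math

spot = 2800.0          # index level (hypothetical)
futures = 2815.0       # futures price (hypothetical)
div_yield = 0.019      # continuous dividend yield (hypothetical)
tenor = 0.25           # years to expiration

implied_rate = math.log(futures / spot) / tenor + div_yield
# Comparing implied_rate with a contemporaneous Eurodollar or Treasury
# rate gives the implicit-vs-explicit financing spread studied in the talk.
```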
I document the “securitization and solicited refinancing channel,” a novel transmission mechanism of monetary policy and its heterogeneous regional effects. The mechanism predicts that mortgage lenders who sell their originations to Government Sponsored Enterprises or into securitizations no longer hold the loan’s prepayment risk, and when rates drop, these lenders are more likely to signal to their borrowers to refinance, resulting in more borrower refinancing. A regression analysis finds that in response to a decline in mortgage-backed security yields, regions where originate-to-sell-or-
We argue that emotional coloring of experiences via political propaganda has long-term effects on risk taking. We show that living in an anti-capitalist system reduces individuals' willingness to invest in the stock market even decades later. Utilizing a large comprehensive data set of 300,000 clients of a German discount broker, we find that even today East Germans invest less in the stock market, both at the extensive and the intensive margin, are more likely to hold stocks of communist countries, and are less likely to hold stocks of capitalist institutions and countries. Effects are stronger for individuals for whom we expect stronger emotional priming under the communist regime, for example those living in “showcase cities” renamed after communist politicians and in cities of Olympic gold medalists. In contrast, effects are weaker in regions where people had a less positive experience, including areas with high levels of religiosity, areas that experienced significant environmental pollution, and areas where people did not have (Western) TV entertainment. We show that exposure to anti-capitalist propaganda is costly and results in less diversified portfolios, more expensive actively managed funds, and, ultimately, lower risk-adjusted returns. The long-term effects of anti-capitalist propaganda appear to have significant welfare consequences.
Download the slides from this presentation: LMNEastWest_Berkeley_12apr2018
Significant market events such as the Flash Crash of 2010 undermine trust in the capital market system. An ability to forecast such events would give market participants and regulators time to react and mitigate their impact. For this reason, there have been a number of attempts to develop early warning indicators. In this work, we explore one such indicator, the Probability of Informed Trading (typically shortened to PIN), and its variants. In an earlier test, a variant known as VPIN was demonstrated to show a strong signal more than an hour before the Flash Crash of 2010. A number of articles have since debated whether or not the VPIN signal is accidental. By employing a supercomputer, we are able to systematically examine the effectiveness of a number of variants of PIN. In this talk, we will discuss how this computing power helps us explore the parameters controlling the performance of PIN, leading us to more effective ways to use it.
Download the slides from this presentation: 1804-BIPIN-Risk-v0
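The mechanics of a VPIN-style indicator can be sketched in a few lines: group trades into equal-volume buckets and average the absolute buy/sell imbalance per bucket. The synthetic trade feed, the naive direction classification, and the 50-bucket choice below are my own simplifications, not the speakers' implementation.

```python
# Hedged sketch of the VPIN idea: equal-volume buckets, absolute
# buy/sell volume imbalance per bucket, averaged over buckets.
# Synthetic data stand in for a real, classified trade feed.
import numpy as np

rng = np.random.default_rng(2)
n_trades = 10_000
volume = rng.integers(1, 100, n_trades).astype(float)
is_buy = rng.random(n_trades) < 0.5        # toy trade-direction labels

bucket_size = volume.sum() / 50            # aim for 50 equal-volume buckets
imbalances, buy_v, sell_v, filled = [], 0.0, 0.0, 0.0
for v, b in zip(volume, is_buy):
    buy_v += v if b else 0.0
    sell_v += 0.0 if b else v
    filled += v
    if filled >= bucket_size:              # bucket complete: record imbalance
        imbalances.append(abs(buy_v - sell_v) / (buy_v + sell_v))
        buy_v = sell_v = filled = 0.0

vpin = float(np.mean(imbalances))          # in [0, 1]; higher = more one-sided flow
```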
Statistical arbitrage is a collection of trading algorithms that are widely used today but can have very uneven performance, depending on their detailed implementation. I will introduce these methods and explain how the data used as trading signals are prepared so that they depend weakly on market dynamics but have adequate statistical regularity. The trading algorithm itself will be presented, and a well-calibrated version of it will then be applied to daily S&P 500 data from 2003–2014. Well calibrated means that the risk associated with the trading algorithm can be identified and controlled effectively. This study also shows that, when tested with real data, statistical arbitrage algorithms can produce strong and steady returns that are essentially decoupled from overall market behavior. (Joint work with J. Yeo.)
Download the slides from this presentation: statarbslideUCB
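A toy version of the kind of rule used in statistical arbitrage: trade a mean-reverting spread when its rolling z-score is extreme, and exit near zero. The simulated spread, the 60-day window, and the ±2 / 0.5 thresholds are arbitrary illustrative choices, not the calibration from the talk.

```python
# Toy mean-reversion rule on a simulated (OU-style) spread: short when the
# z-score is very high, long when very low, flat near fair value.
import numpy as np

rng = np.random.default_rng(3)
n = 2_000
spread = np.zeros(n)
for t in range(1, n):                         # AR(1) mean-reverting spread
    spread[t] = 0.95 * spread[t - 1] + rng.standard_normal()

window = 60
positions = np.zeros(n)
for t in range(window, n):
    hist = spread[t - window:t]
    z = (spread[t] - hist.mean()) / hist.std()
    if z > 2:
        positions[t] = -1                     # spread rich: short it
    elif z < -2:
        positions[t] = 1                      # spread cheap: long it
    elif abs(z) < 0.5:
        positions[t] = 0                      # near fair value: go flat
    else:
        positions[t] = positions[t - 1]       # otherwise hold the position

pnl = positions[:-1] * np.diff(spread)        # per-period P&L of the rule
```

Because entries and exits depend only on the spread's own deviations, the resulting P&L is largely decoupled from the overall market, which is the property the abstract highlights.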