Fall 2016
November 29: Robert M. Anderson, CDAR Co-Director: PCA with Model Misspecification
Principal Component Analysis (PCA) relies on the assumption that the data being analyzed are IID over the estimation window. In this project with UC Berkeley PhD candidate Farzad Pourbabaee, we note that PCA is frequently applied to financial data, such as stock returns, despite the fact that these data exhibit obvious and substantial changes in volatility. We show that the IID assumption can be substantially weakened; we require only that the return data be generated by a single distribution with a possibly variable scale parameter. In other words, we assume that the return is R_t = v_t φ_t, where the variables φ_t are IID with finite variance, and v_t and φ_t are independent. We find that when PCA is applied to data of this form, it correctly identifies the underlying factors, but with some loss of efficiency compared to the IID case.
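The factor-recovery claim can be sketched in a small simulation (all numbers and the sinusoidal volatility path below are our own illustrative choices, not from the paper): generate returns R_t = v_t φ_t with a common time-varying scale, run PCA on the sample covariance, and check that the top eigenvectors still span the true factor loadings.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n, k = 2000, 20, 3          # periods, assets, factors

# IID shocks phi_t = B f_t + eps_t (finite variance)
B = rng.normal(size=(n, k))            # true factor loadings
F = rng.normal(size=(T, k))            # factor returns
eps = 0.5 * rng.normal(size=(T, n))    # idiosyncratic returns

# Scale parameter v_t, independent of the shocks: any positive
# process works; here a slowly varying deterministic path
v = np.exp(0.5 * np.sin(np.linspace(0, 8 * np.pi, T)))

# Returns R_t = v_t * phi_t are IID only up to scale
R = v[:, None] * (F @ B.T + eps)

# PCA on the sample covariance of the non-IID returns
w, V = np.linalg.eigh(np.cov(R, rowvar=False))
top = V[:, -k:]                        # top-k principal directions

# Principal angles between span(top) and span(B): cosines near 1
# mean the factors are recovered despite the variable scale
Qb, _ = np.linalg.qr(B)
s = np.linalg.svd(Qb.T @ top, compute_uv=False)
print("subspace alignment (cosines of principal angles):", np.round(s, 3))
```

With these settings the cosines come out close to one, consistent with the abstract's claim that the factors are identified with only a loss of efficiency.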
Now assume that the scale parameter is set by a continuous mean-reverting process, such as the volatility of return in the Heston model. We use an exponentially weighted standard deviation of historical returns, v̂_t, as an estimate of v_t. It is standard practice to estimate risk measures such as Value at Risk (VaR) and Expected Tail Loss (ETL) from the estimated volatility v̂_t by assuming that the returns are Gaussian. These Gaussian estimates systematically underforecast VaR and ETL in the presence of variable volatility, excess kurtosis, and negative skew. We propose the “historical method,” which uses the empirical distribution of R_{t+1}/v̂_t, as a more robust method for estimating VaR and ETL. In simulation, we find the historical method provides accurate forecasts of both VaR and ETL in the presence of variable volatility and excess kurtosis, and accurate forecasts of VaR in the presence of negative skew.
November 15: Jim Hawley & Hendrik Bartel, TruValue Labs: Big Data Analytics and ‘Non-Financial’ Sustainability Information—uses of and initial findings from TruValue Labs’ first years
We present an overview of the current state of ESG (environmental, social, and corporate governance) data in the context of the value of so-called non-financial information. TruValue Labs (TVL) generates real-time ESG/sustainability data using natural language processing, machine learning, and elements of AI to quantify unstructured (text) information sources. We present an overview of how this quantification process works, and follow on by examining a variety of techniques used to analyze this data output. We will present some of these techniques and initial results, including TVL data as a leading indicator of stock movements under certain circumstances and as a predictor of rating changes by leading ESG raters, as well as a number of other use cases of TVL data. Additionally, we speculate on future uses and types of analysis this unique data set enables.
November 8: Farzad Pourbabaee, UC Berkeley: Portfolio selection: Capital at risk minimization under correlation constraint
We study the portfolio optimization problem in the Black-Scholes setup, subject to certain constraints. Capital at Risk (CaR) has turned out to resolve many of the shortcomings of Value at Risk, and hence is taken in this presentation as the objective of the optimization problem. The CaR-minimizing portfolio is then found under a general correlation constraint between the terminal value of the wealth and an arbitrary financial index. Results are derived in both complete and incomplete markets, and finally simulations are performed to illustrate the qualitative behavior of the optimal portfolio.
http://www.tandfonline.com/doi/abs/10.1080/14697688.2015.1115891
November 1: Danny Ebanks, Federal Reserve: The Network of Large-Value Loans in the US: Concentration and Segregation
In this joint project with Anton Badev, we analyze the universe of large-value loans intermediated through Fedwire, the primary U.S. real-time gross settlement service provided by the Federal Reserve System, for the period from 2007 to 2015. We embed banks' bilateral lending relationships and interest rate quotes in a game on a graph following Badev (2013), for which we approximate the equilibrium play via a k-player dynamic. We document a series of fundamental changes in the topology of the bilateral loan network, and propose a framework to study the evolution of concentration of large-value loan intermediaries.
October 25: Ryan Copus and Hannah Laqueur, UC Berkeley: Machines Learning Justice: A New Approach to the Problems of Inconsistency and Bias in Adjudication
We consider the two-factor version of a family of time-homogeneous interest rate models introduced by Cairns (Math Finance, 2004) in the Flesaker-Hughston positive interest framework. Specifically, we calibrate the model to cross-sectional USD swap and swaption market data, and we compare the corresponding model-implied dynamics to those of the swap market rates via PCA. We investigate whether allowing a non-zero lower bound improves the model fit. The model dynamics are reformulated as a two-dimensional Ito process for the short rate and the consol rate (the par yield on a bond with no finite maturity date) in order to relate it to the influential Brennan-Schwartz model of the late 1970s. As a final digression, we explore the natural space of rates that live between the short rate and the consol rate.
October 4: Samim Ghamami, Office of Financial Research: Does OTC Derivatives Reform Incentivize Central Clearing?
Joint work with Paul Glasserman
Abstract: The reform program for the over-the-counter (OTC) derivatives market launched by the G-20 nations in 2009 seeks to reduce systemic risk from OTC derivatives. The reforms require that standardized OTC derivatives be cleared through central counterparties (CCPs), and they set higher capital and margin requirements for non-centrally cleared derivatives. Our objective is to gauge whether the higher capital and margin requirements adopted for bilateral contracts create a cost incentive in favor of central clearing, as intended. We introduce a model of OTC clearing to compare the total capital and collateral costs when banks transact fully bilaterally versus the capital and collateral costs when banks clear fully through CCPs. Our model and its calibration scheme are designed to use data collected by the Federal Reserve System on OTC derivatives at large bank holding companies. We find that the main factors driving the cost comparison are (i) the netting benefits achieved through bilateral and central clearing; (ii) the margin period of risk used to set initial margin and capital requirements; and (iii) the level of CCP guarantee fund requirements. Our results show that the cost comparison does not necessarily favor central clearing and, when it does, the incentive may be driven by questionable differences in CCPs' default waterfall resources. We also discuss the broader implications of these tradeoffs for OTC derivatives reform.
September 27: Mark Flood, Office of Financial Research: Measures of Financial Network Complexity: A Topological Approach
We present a general definition of complexity appropriate for financial counterparty networks, and derive several topologically based implementations. These range from simple and obvious metrics to others that are more mathematically subtle. It is important to tailor a complexity measure to the specific context in which it is used. This paper introduces measures of the complexity of search and netting in dealer markets. We define measures of line graph homology and collateral line graph homology that are sensitive to network interactions, such as collateral commingling and interdependent chains of obligations, that can be difficult or intractable to unwind.
September 20: Ram Akella, University of California, Berkeley School of Information and TIM/CITRIS/UCSC: Dynamic Multi-modal and Real-Time Causal Predictions and Risks
There are three major trends in prediction and risk analytics. We describe our research on two fronts and speculate on the third, in the context of healthcare analytics and computational advertising at Silicon Valley firms. We first describe prediction and risk analytics using combined multi-modal numerical data (from vitals and labs) and text data (from notations by doctors and nurses). We describe and analyze dynamic models of patient mortality probabilities, integrate novel topic modeling to account for topic constraints, and demonstrate superior performance on Intensive Care Unit (ICU) data. We then describe novel experimental design and estimation methods for advertising, using a targeting engine in the presence of auctions, and the significant performance benefits versus (invalid) standard A/B testing. We finally speculate on integrated machine learning and causal statistics, and the role of deep learning and reinforcement learning, in these and other problems, including sensor-based IoT in water analytics.
September 13: Daniel Mantilla-Garcia, Optimal Asset Management: Disentangling the Volatility Return: A Predictable Return Driver of Any Diversified Portfolio
Abstract: We consider the problem of initial margin (IM) modeling for portfolios of credit default swaps (CDS) from the perspective of a derivatives Central Counterparty (CCP). In practice, CCPs' IM models are based on theoretically unfounded, direct statistical modeling of CDS spreads. Using a reduced-form approach, our IM model based on stochastic default intensity prices the portfolio constituents in a theoretically meaningful way and shows that statistical IM models can underestimate CCPs' collateral requirements. In addition, our proposed affine jump-diffusion intensity modeling approach illustrates that a counter-cyclical IM scheme can be implemented from a macro-prudential perspective.
This is joint work with Samim Ghamami (Office of Financial Research) and Dong Hwan Oh (Federal Reserve Board).
August 30: Alex Papanicolaou: Testing Local Volatility in Short Rate Models
The first CRMR Risk Seminar of Fall 2016 features work by CDAR Postdoc Alex Papanicolaou.
Abstract: We provide a simple and easy-to-use goodness-of-fit test for the misspecification of the volatility function in diffusion models. The test uses power variations constructed as functionals of discretely observed diffusion processes. We introduce an orthogonality condition which stabilizes the limit law in the presence of parameter estimation and avoids the necessity for a bootstrap procedure, which reduces performance and leads to complications associated with the structure of the diffusion process. The test has good finite sample performance, as we demonstrate in numerical simulations.
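As a rough illustration of the underlying idea (not the paper's actual test statistic or its orthogonality condition), one can compare a realized power variation of a discretely observed path against the integrated variance implied by a hypothesized volatility function; under correct specification the ratio is near one, and misspecification shifts it away. All model parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 20000, 1.0
dt = T / n

# Euler path of a CIR-type short rate: dX = kappa(theta - X)dt + sigma*sqrt(X)dW
kappa, theta, sigma = 2.0, 0.05, 0.1
X = np.empty(n + 1)
X[0] = 0.05
dW = rng.normal(scale=np.sqrt(dt), size=n)
for i in range(n):
    X[i + 1] = X[i] + kappa * (theta - X[i]) * dt \
               + sigma * np.sqrt(max(X[i], 0.0)) * dW[i]

# Order-2 power variation (realized quadratic variation) of the path
qv = np.sum(np.diff(X) ** 2)

# Integrated variance implied by the hypothesized volatility function
iv = np.sum(sigma ** 2 * X[:-1] * dt)

# Under correct specification the ratio is near 1; misspecifying the
# volatility function (e.g. assuming constant vol) shifts it away
print("QV / implied IV:", round(qv / iv, 3))
```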
Spring 2016 Weekly Seminars
All seminars took place 11:00AM to 12:30PM in 639 Evans Hall at UC Berkeley (unless otherwise noted)
January 19: Jeff Bohn, State Street Global Exchange: Simplicity and complexity in risk modeling: When is a risk model too simple?
Most financial risk modelers (like most scientific modelers) attempt to develop models that are as simple as possible while still remaining useful. That said, models can become too simple, thereby losing their signalling power. This circumstance can leave a risk manager without guidance as to the material risks to which a financial institution may be exposed. As financial markets, financial securities, and regulations have proliferated, trading off simplicity and complexity has become a particularly difficult challenge for financial risk modelers. This presentation introduces some thoughts and raises questions with respect to how a risk modeler decides when a risk model is too simple. The discussion will lean toward practical (as opposed to theoretical) considerations.
January 26: Roger Craine, UC Berkeley: Safe Capital Ratios for Bank Holding Companies
This paper gives three quantitative answers to Fischer’s question “at what level should capital ratios be set?” based on (1) the Fed Stress Tests 2015, (2) VLab’s Systemic Risk measures, and (3) our (Craine-Martin) estimates.
This paper compares Safe Capital Ratios for 18 Bank Holding Companies. The Craine-Martin (CM) implied safe capital ratios are the highest, averaging 22%, followed by VLab’s, averaging 16%, and the Fed Stress Tests the lowest, at 11%. We (CM) find higher implied safe capital ratios than VLab because our specification allows losses at one bank holding company to affect the others. In a crisis, accounting for the covariance among bank holding companies’ returns gives much larger losses, since their returns and asset values are positively correlated. Both CM and VLab find larger implied safe capital ratios than the Fed Stress Tests because they calculate the loss to the market value of bank equity during a crisis, while the Fed Stress Tests calculate the loss to the book value of bank equity in a crisis. Book equity values don’t respond very much to a crisis—even a crisis as large as the Great Recession—so the implied book value safe capital ratios are not as large. Paper link: http://eml.berkeley.edu/~craine/2009/Capital%20Ratios%2001-05-16.pdf
February 2: Roger Stein, MIT: A simple hedge for longevity risk and reimbursement risk using research-backed obligations
Longevity risk is the risk that the promised recipient of lifetime cashflows ends up living much longer than originally anticipated, thus causing a shortfall in funding. A related risk, reimbursement risk, is the risk that providers of health insurance face when new and expensive drugs are introduced and the insurer must cover their costs. Longevity and reimbursement risks are particularly acute in domains in which scientific breakthroughs can increase the speed of new drug development. An emerging asset class, research-backed obligations or RBOs (cf. Fernandez et al., 2012), provides a natural mechanism for hedging these risks: RBO equity tranches gain value as new life-extending therapies are developed, and do so in proportion to the number of successful therapies introduced. We use the stylized case of annuity underwriting to show how RBO equity could be used to hedge some forms of longevity risk on a retirement portfolio. Using the same framework, we then show how RBO securities may be used to hedge a much broader class of reimbursement risks faced by health insurance firms. We demonstrate how to compute hedge ratios to neutralize specific exposures. Although our analytic results are stylized, our simulation results suggest substantial potential for this asset class to reduce financial uncertainty for those institutions exposed to either longevity or reimbursement risks.
February 9: Nathan Tidd, Tidd Labs: Predicting Equity Returns with Valuation Factor Models
This presentation summarizes the motivation, methodology, and initial test results of Equity Valuation Factor Models, an adaptation of popular multi-factor modeling techniques that seeks to explain the price and payoff of business ownership as a precursor to explaining equity returns. A departure from traditional returns-based models, the approach produces new information, such as current factor prices, that informs both risk and return expectations, with a number of potential applications for real-world investment decisions.
The February 16 Risk Seminar will take place in 1011 Evans Hall
February 16: Ezra Nahum, Goldman Sachs: The Life of a Quant 1995-2015
In this lecture, I will review different practical risk management and modeling challenges that I encountered during the course of my career (inclusive of my Ph.D. years). My review will highlight how the landscape has changed. For instance, while modeling exotic derivatives was the main activity for quants in the late 90s, capital optimization is the most important consideration today.
February 23: Yaniv Konchitchki, UC Berkeley (Haas): Accounting and the Macroeconomy: The Housing Market
This study introduces a new approach for financial statement analysis—the geographic analysis of firms’ financial statements.
March 1: Will Fithian, UC Berkeley: Semiparametric Exponential Families for Heavy-Tailed Data**
We propose a semiparametric method for fitting the tail of a heavy-tailed population given a relatively small sample from that population and a larger sample from a related background population. We model the tail of the small sample as an exponential tilt of the better-observed large-sample tail, using a robust sufficient statistic motivated by extreme value theory. In particular, our method induces an estimator of the small-population mean, and we give theoretical and empirical evidence that this estimator outperforms methods that do not use the background sample. We demonstrate substantial efficiency gains over competing methods in simulation and on data from a large controlled experiment conducted by Facebook.
**This is joint work with Stefan Wager.
March 8: Alex Papanicolaou, Integral Development Corporation: Background Subtraction for Pattern Recognition in High Frequency Financial Data
Financial markets produce massive amounts of complex data from multiple agents, and analyzing these data is important for building an understanding of markets, their formation, and the influence of different trading strategies. I introduce a signal processing approach to deal with these complexities by applying background subtraction methods to high frequency financial data so as to extract significant market making behavior. In foreign exchange, for prices in a single currency pair from many sources, I model the market as a low-rank structure with an additive sparse component representing transient market making behavior. I consider case studies with real market data, showing both in-sample and online results, for how the model reveals pricing reactions that deviate from prevailing patterns. I place this study in context with alternative low-rank models used in econometrics as well as in high frequency financial models and discuss the broader implications of the melding of background subtraction, pattern recognition, and financial markets as it relates to algorithmic trading and risk. To my knowledge this is the first use of high-dimensional signal processing methods for pattern recognition in complex automated electronic markets.
March 15: Kellie Ottoboni, UC Berkeley: Model-based matching for causal inference in observational studies
Drawing causal inferences from nonexperimental data is difficult due to the presence of confounders, variables that affect both the selection into treatment groups and the outcome. Post-hoc matching and stratification can be used to group individuals who are comparable with respect to important variables, but commonly used methods often fail to balance confounders between groups. We introduce model-based matching, a nonparametric method which groups observations that would be alike aside from the treatment. We use model-based matching to conduct stratified permutation tests of association between the treatment and outcome, controlling for other variables. Under standard assumptions from the causal inference literature, model-based matching can be used to estimate average treatment effects. We give examples of model-based matching to test the effect of packstock use on endangered toads and of salt consumption on mortality at the level of nations.
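A toy sketch of the mechanics described above (the simulated data and all design choices are ours, not the authors'): fit an outcome model on covariates only, stratify observations on the prediction, and run a stratified permutation test of the treatment.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000

# Simulated observational data: x confounds both treatment and outcome
x = rng.normal(size=n)
treat = (x + rng.normal(size=n) > 0).astype(int)
y = 2.0 * x + 1.0 * treat + rng.normal(size=n)   # true effect = 1

# Model-based matching: predict the outcome from covariates only
# (a least-squares fit of y on x), then stratify on the prediction
yhat = np.polyval(np.polyfit(x, y, 1), x)
strata = np.digitize(yhat, np.quantile(yhat, [0.2, 0.4, 0.6, 0.8]))

def stat(t):
    """Average within-stratum treated-minus-control outcome difference."""
    diffs = [y[(strata == s) & (t == 1)].mean() - y[(strata == s) & (t == 0)].mean()
             for s in np.unique(strata)
             if 0 < t[strata == s].sum() < (strata == s).sum()]
    return np.mean(diffs)

def permute_within(t):
    """Shuffle treatment labels within each stratum (the permutation null)."""
    out = t.copy()
    for s in np.unique(strata):
        m = strata == s
        out[m] = rng.permutation(t[m])
    return out

obs = stat(treat)
null = [stat(permute_within(treat)) for _ in range(500)]
pval = float(np.mean([abs(v) >= abs(obs) for v in null]))
print("within-stratum difference:", round(obs, 2), " p-value:", pval)
```

Because the strata here are coarse, the within-stratum difference retains some confounding bias; the sketch is meant only to show the stratified permutation-test machinery, not the paper's exact estimator.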
March 22: Spring Break (UC Berkeley Campus Closed). The Risk Seminar will reconvene on March 29, 2016.
March 29: Keith Sollers, UC Davis: Recent Developments in Optimal Placement of Trades
Optimal placement of trades has received more attention recently, particularly in the high-frequency trading venue. We define a formulation of the optimal placement problem and present a closed-form solution to this problem in the discrete-time case. We then discuss the continuous-time case, where optimal solutions exist but no closed-form solution is known. After tuning the models using high-frequency market data, we present numerical solutions in continuous-time and exact solutions in discrete-time.
April 5: Alex Shkolnik, UC Berkeley: Dynamic Importance Sampling for Compound Point Processes
We develop efficient importance sampling estimators of certain rare event probabilities involving compound point processes. Our approach is based on the state-dependent techniques developed in (Dupuis & Wang 2004) and subsequent work. The design of the estimators departs from past literature to accommodate the point process setting. Namely, the state-dependent change of measure is updated not at event arrivals but over a deterministic time grid. Several common criteria for the optimality of the estimators are analyzed. Numerical results illustrate the advantages of the proposed estimators in an application setting.
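The setting can be illustrated with a much simpler, fixed (state-independent) exponential tilt for a compound Poisson tail probability — the paper's estimators instead update a state-dependent change of measure over a deterministic time grid, which this sketch does not attempt. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
lam, T, b, n_paths = 1.0, 1.0, 8.0, 20000   # rate, horizon, threshold, paths

def estimate(theta):
    """P(S_T > b) for a compound Poisson sum with Exp(1) jumps, using an
    exponential tilt theta: jumps become Exp(1 - theta) and the arrival
    rate becomes lam * M with M = 1/(1 - theta); theta = 0 is plain MC."""
    M = 1.0 / (1.0 - theta)
    vals = np.empty(n_paths)
    for i in range(n_paths):
        n_jumps = rng.poisson(lam * M * T)
        s = rng.exponential(scale=1.0 / (1.0 - theta), size=n_jumps).sum()
        # Likelihood ratio dP/dQ of the original vs. tilted measure
        lr = np.exp(-theta * s + lam * T * (M - 1.0))
        vals[i] = lr * (s > b)
    return vals.mean(), vals.std() / np.sqrt(n_paths)

p_mc, se_mc = estimate(0.0)
p_is, se_is = estimate(0.7)   # tilt pushes E[S_T] past the threshold b
print(f"plain MC:  {p_mc:.2e} +/- {se_mc:.1e}")
print(f"tilted IS: {p_is:.2e} +/- {se_is:.1e}")
```

The two estimates agree, but the tilted estimator's standard error is far smaller — the variance reduction that state-dependent schemes then push further into genuinely rare regimes.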
April 12: Alex Shkolnik, UC Berkeley: Identifying Financial Risk Factors with a Low-Rank Sparse Decomposition
Factor models of security returns aim to decompose an asset return covariance matrix into a systematic component and a specific risk component. Standard approaches like PCA and maximum likelihood suffer from several drawbacks, including a lack of robustness and strict assumptions on the underlying model of returns.
We survey some modern, robust methods to uniquely decompose a return covariance matrix into a low-rank component and a sparse component. Surprisingly, the identification of the unique low-rank and sparse components is feasible under mild assumptions. We apply the method of Chandrasekaran, Parrilo and Willsky (2012) for latent graphical models to decompose a security return covariance matrix. The low-rank component includes the market and other broad factors that affect most securities. The sparse component includes thin factors, such as industry and country, which affect only a small number of securities, but in an important way. We illustrate the decomposition on simulated data, and also on an empirical data set drawn from 25,000 global equities.
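A toy sketch of the decomposition (a crude alternating truncation/thresholding proxy for the convex program of Chandrasekaran, Parrilo and Willsky — not their algorithm — with all sizes illustrative): build a covariance with a broad rank-2 factor part plus a small "industry" block, then separate the two pieces.

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 40, 2

# Covariance = broad low-rank factors + sparse "thin factor" block + noise
B = rng.normal(size=(n, k))
L_true = B @ B.T
S_true = np.zeros((n, n))
S_true[:5, :5] = 0.8                   # a small industry-style block
Sigma = L_true + S_true + np.eye(n)

# Alternate a rank-k SVD truncation with hard thresholding of the
# remainder (a stand-in for the convex relaxation in the paper)
L = np.zeros_like(Sigma)
S = np.zeros_like(Sigma)
for _ in range(50):
    U, w, Vt = np.linalg.svd(Sigma - S)
    L = (U[:, :k] * w[:k]) @ Vt[:k]         # low-rank (broad factor) part
    R = Sigma - L
    S = np.where(np.abs(R) > 0.5, R, 0.0)   # sparse part keeps big entries

rank_L = np.linalg.matrix_rank(L, tol=1e-6)
nnz_off = np.count_nonzero(S - np.diag(np.diag(S)))
print("rank of low-rank part:", rank_L)
print("off-diagonal nonzeros in sparse part:", nnz_off)
```

The low-rank part comes back with the factor rank, while the sparse part concentrates on the diagonal and the small block — the "thin factor" structure the abstract describes.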
April 19: Johan Walden, UC Berkeley (Haas): Trading, Profits, and Volatility in a Dynamic Information Network Model
We introduce a dynamic noisy rational expectations model, in which information diffuses through a general network of agents. In equilibrium, agents’ trading behavior and profits are determined by their position in the network. Agents who are more closely connected have more similar period-by-period trades, and an agent’s profitability is determined by a centrality measure that is closely related to eigenvector centrality. In line with the Mixture of Distributions Hypothesis, the market’s network structure influences aggregate trading volume and price volatility. Volatility after an information shock is more persistent in less central networks, and in markets with a higher degree of private information. Similar results hold for trading volume. The shapes of the autocorrelation functions of volatility and volume are related to the degree of asymmetry of the information network. Altogether, our results suggest that these dynamics contain important information about the underlying information diffusion process in the market.
April 26: Yang Xu, UC Berkeley: Intervention to Mitigate Contagion in a Financial Network
Systemic risk in financial networks has received attention from academics since the 2007-2009 financial crisis. We analyze a financial network from the perspective of a regulator who aims to minimize the fraction of defaults under a budget constraint. Unlike the majority of literature in this field, the connections between financial institutions (hereafter, banks) are assumed unknown in the beginning, but are revealed as the contagion process unfolds. We focus on the case in which the number of initial defaults is small relative to the total number of banks. We analyze the optimal intervention policy first for a regular network consisting of “vulnerable banks”. We then discuss the optimal intervention problem in a more general network setting.
Fall 2015 Weekly Seminars
11:00am to 12:30pm in 639 Evans Hall at UC Berkeley
September 1st: Jose Menchero, MPAC: Rethinking the Fundamental Law
In this presentation Jose Menchero examines the Fundamental Law of Active Management. He will show that the standard formulation is not strictly correct and unnecessarily limits the applicable use cases of the Fundamental Law. Dr. Menchero will introduce a new formulation of the Fundamental Law by reinterpreting skill and breadth. His new formulation of the Fundamental Law is exact and provides explicit closed-form solutions for the skill and breadth given any set of asset alphas and covariance matrix. The presentation concludes with a discussion of the impact of constraints as measured by the Transfer Coefficient.
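For reference, the standard (approximate) formulation of the Fundamental Law, due to Grinold, relates the information ratio IR to skill, measured by the information coefficient IC, and breadth BR, the number of independent bets:

```latex
\mathrm{IR} \;\approx\; \mathrm{IC}\cdot\sqrt{\mathrm{BR}}
```

It is this approximation, and the restrictions needed to justify it, that the talk argues is not strictly correct and replaces with an exact closed-form counterpart.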
September 8th: Ryan McCorvie, UC Berkeley Statistics: CVA, FVA, KVA and all that: A Survey of Derivatives Valuation Adjustments
In the wake of the financial crisis, financial institutions have attempted to account for the full costs of derivatives, within a mark-to-market framework. These additional costs include: the costs of counterparty bankruptcy, the effect on funding costs for the firm, and the costs related to regulatory capital. The methodologies employed to make these adjustments go beyond the basic theory of risk-neutral pricing. While the practice of these adjustments is becoming standardized within the industry, the theory and public policy implications remain controversial. This talk will summarize some of the major approaches, as well as the major controversies, of derivatives valuation adjustments.
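As context for the first of these adjustments, a minimal sketch of textbook unilateral CVA computed from a discounted expected-exposure profile (all numbers below are illustrative, not any bank's methodology):

```python
import numpy as np

# Discounted expected positive exposure EE(t_i) of a netting set at a
# grid of dates (illustrative numbers, in $mm), flat hazard rate, LGD
t = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
EE = np.array([1.2, 1.8, 2.1, 2.0, 1.6, 0.9])
h, LGD = 0.02, 0.6

# Default probability in each bucket from the survival curve exp(-h t)
surv = np.exp(-h * np.concatenate(([0.0], t)))
dPD = surv[:-1] - surv[1:]

# Textbook unilateral CVA: LGD * sum_i EE(t_i) * P(default in bucket i)
cva = LGD * np.sum(EE * dPD)
print(f"CVA: {cva:.4f} ($mm)")
```

FVA and KVA follow the same discount-and-sum pattern but replace the default-probability weights with funding spreads and capital charges, which is where the controversies the talk surveys begin.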
September 15th: Jin-Chuan Duan, National University of Singapore: Non-Gaussian Bridge Sampling with Applications
This paper provides a new bridge sampler that can efficiently generate sample paths, subject to some endpoint condition, for non-Gaussian dynamic models. This bridge sampler uses a companion pseudo-Gaussian bridge as the proposal and sequentially re-simulates sample paths via a sequence of tempered importance weights in a way bearing resemblance to the density-tempered sequential Monte Carlo method used in the Bayesian statistics literature. This bridge sampler is further accelerated by employing a novel idea of k-fold duplicating a base set of sample paths followed by support boosting. We implement this bridge sampler on a GARCH model estimated on the S&P 500 index series, and our implementation covers both parametric and non-parametric conditional distributions. Our performance study reveals that this new bridge sampler is far superior to either the simple-rejection method when it is applicable or other alternative samplers designed for paths with a fixed endpoint. Two applications are demonstrated — computing SRISK of the NYU Volatility Institute and infill maximum likelihood estimation.
September 22nd: Mike Mahoney, UC Berkeley Statistics: Community structure in social and financial networks
The concept of a community is central to social network analysis, and communities are thought to be present in and responsible for properties of a wide range of networks such as networks of financial interactions. Motivated by difficulties we experienced at actually finding meaningful communities in very large real-world networks, we have performed a large scale analysis of a wide range of social and information networks. Our empirical results suggest a significantly more refined picture of community structure than has been appreciated previously. Our most striking finding is that in nearly every network dataset we examined, we observe tight but almost trivial communities at very small size scales, and at larger size scales, the best possible communities gradually “blend in” with the rest of the network and thus become less “community-like.” This behavior is not explained, even at a qualitative level, by any of the commonly-used network generation models. Moreover, this behavior is exactly the opposite of what one would expect based on experience with and intuition from expander graphs, from graphs that are well-embeddable in a low-dimensional structure, and from small social networks that have served as testbeds of community detection algorithms. Implications of these results for dynamics, robustness, and risk in networks more generally will be discussed.
September 29th: Our seminar series will reconvene on October 6th.
October 6th: Raymond Leung, UC Berkeley: Centralized versus Decentralized Delegated Portfolio Management under Moral Hazard
In the presence of moral hazard over the investment strategy choices, should a principal investor give all his wealth to one generalist agent to manage several investment strategies, or separate his wealth to multiple distinct strategy agents? In this static economy, all individuals have mean-variance preferences over terminal wealth, and linear contracts over managed fund performance are offered. In centralized delegation, the principal needs to compensate the agent for the private costs of taking the desired strategy pairs but also the opportunity cost for any foregone long-short strategies the agent could have enjoyed. This additional layer of moral hazard implies that if the agent’s strategy choice set is too wide, or the agent is nearly risk neutral, no centralized contract will exist. In decentralized delegation, one agent’s strategy choice does not affect another agent’s choice, and since the principal decides both portfolio and fee policies, decentralization tolerates wider moral hazard over agents’ investment choice sets. But risk averse decentralized agents do not internalize each other’s investment strategy choices, so their incentive compatibility conditions are analogous to those of risk neutral individuals, which implies sufficiently high private costs could lead to contract nonexistence. Furthermore, this means that in a static model, while the principal can coordinate the centralized agent on strategy correlations, it could be impossible under decentralization. In a dynamic model, decentralized agents have a motive to intertemporally hedge their future wealth with that of the principal, and so, using the principal’s intertemporal wealth as a bridge between multiple agents’ payoffs, the principal can coordinate correlations between decentralized agents.
Even though individuals have mean-variance preferences and only linear contracts are considered, the dynamic model shows that moral hazard endogenously requires the full joint probability distribution of on- and off-equilibrium strategy returns, beyond just the first and second moments. We numerically investigate how tail dependence of strategy returns affects the dynamic contracting environment.
October 13th: Haoyang Liu, UC Berkeley: Correction for Finite Sample Bias in Mean Variance Frontier
Calculating the mean variance frontier requires knowledge of the asset return covariance matrix. A natural estimate for the population covariance matrix is the sample covariance matrix. However, it has been shown that using the sample covariance matrix in the efficient frontier formula leads to significant risk underestimation. This paper corrects for this downward bias in risk under a time series setting. Our techniques have the potential to solve other issues in finance, including finite sample bias in GMM standard errors.
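A small Monte Carlo sketch of the phenomenon the paper corrects (our own illustrative parameters): the plug-in minimum-variance portfolio built from a sample covariance matrix reports less risk than it truly bears.

```python
import numpy as np

rng = np.random.default_rng(5)
n, T, trials = 25, 60, 300     # assets, observations, Monte Carlo trials

# True covariance: one-factor structure plus idiosyncratic variance
beta = rng.normal(1.0, 0.3, size=n)
Sigma = 0.04 * np.outer(beta, beta) + 0.02 * np.eye(n)
ones = np.ones(n)
true_gmv_var = 1.0 / (ones @ np.linalg.solve(Sigma, ones))

est_vars, real_vars = [], []
for _ in range(trials):
    X = rng.multivariate_normal(np.zeros(n), Sigma, size=T)
    S = np.cov(X, rowvar=False)
    w = np.linalg.solve(S, ones)
    w /= w.sum()                       # plug-in minimum-variance weights
    est_vars.append(w @ S @ w)         # risk the optimizer reports
    real_vars.append(w @ Sigma @ w)    # risk the portfolio actually bears

print("true GMV variance:     ", round(true_gmv_var, 4))
print("avg estimated variance:", round(float(np.mean(est_vars)), 4))
print("avg realized variance: ", round(float(np.mean(real_vars)), 4))
```

With only 60 observations on 25 assets, the reported in-sample variance sits well below the true minimum-variance risk, which in turn sits below the risk actually realized out of sample — the downward bias the talk addresses.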
October 20th: Alex Shkolnik, UC Berkeley: Urn Models and Applications in Finance and Economics
In this talk I will survey certain processes with reinforcement and their applications in finance, economics, and other disciplines. Particular focus will be placed on the generalized Polya and Ehrenfest urn models. I will discuss their applications to consumer behavior, equilibrium selection, stock price fluctuations, trade in financial markets, etc. I will also review probabilistic techniques used to analyze such models: Markov chain and martingale methods, spectral methods and Krawtchouk polynomials, branching process embeddings, and others. I will conclude with a connection to my recent work on collateralized lending in a network of dealer banks.
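A minimal sketch of the classic Polya urn mentioned above: reinforcement makes the fraction of red balls converge to a random, path-dependent limit rather than a fixed value.

```python
import numpy as np

rng = np.random.default_rng(6)

def polya(red=1, blue=1, reinforce=1, steps=5000):
    """Classic Polya urn: draw a ball and return it together with
    `reinforce` extra balls of the same color; track the red fraction."""
    frac = np.empty(steps)
    for i in range(steps):
        if rng.random() < red / (red + blue):
            red += reinforce
        else:
            blue += reinforce
        frac[i] = red / (red + blue)
    return frac

# Each run locks onto a different random limit -- the path dependence
# created by reinforcement (starting from 1+1 balls, the limit fraction
# is uniformly distributed on (0, 1))
limits = [polya()[-1] for _ in range(5)]
print("limiting red fractions across runs:", np.round(limits, 3))
```

This lock-in behavior is the mechanism behind the applications listed above, e.g. consumer behavior and equilibrium selection, where early random advantages become self-reinforcing.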
October 27th: Gustavo Schwenkler, Boston University: The Systemic Effects of Benchmarking
We show that the competitive pressure to beat a benchmark may induce institutional trading behavior that exposes retail investors to tail risk. In our model, institutional investors are different from a retail investor because they derive higher utility when their benchmark outperforms. This forces institutional investors to take on leverage to overinvest in the benchmark. Institutional investors execute fire sales when the benchmark experiences a shock. This behavior increases market volatility, raising the tail risk exposure of the retail investor. Ex post, tail risk is only short lived. All investors survive in the long run under standard conditions, and the most patient investor dominates. Ex ante, however, benchmarking is welfare reducing for the retail investor, and beneficial only to the impatient institutional investor.
3rd: Nick Gunther, UC Berkeley: Information Theory and Portfolio Selection
Based on entropy and other concepts from statistical mechanics, Claude Shannon and others developed information theory to prove certain optimality results in communications theory, including results in data compression, coding and transmission. One financial application of information theory is the construction of ‘universal portfolios’, portfolios that, under appropriate assumptions, achieve the same asymptotic growth rate as the best constant rebalanced portfolio in hindsight.
This talk will review information theory, discuss applications to betting games, and then develop the universal portfolio construction.
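The constant rebalanced portfolio (CRP) benchmark above can be made concrete with Cover's classic two-asset example: a stock that alternately doubles and halves, held against cash. Buy-and-hold in either asset goes nowhere, yet rebalancing to fixed weights grows wealth. A uniform average of CRP wealths over a weight grid gives a discretized approximation of the universal portfolio's wealth; the example data below are stylized.

```python
import numpy as np

def crp_wealth(price_relatives, b):
    """Final wealth of a constant rebalanced portfolio with weights b,
    given a (T, m) array of price relatives (gross period returns)."""
    return float(np.prod(price_relatives @ b))

# Two assets: a stock alternating x2 / x0.5, and cash. 20 periods.
x = np.array([[2.0, 1.0], [0.5, 1.0]] * 10)

grid = np.linspace(0, 1, 101)                  # weight on the stock
wealths = [crp_wealth(x, np.array([b, 1 - b])) for b in grid]
best = grid[int(np.argmax(wealths))]
universal = float(np.mean(wealths))            # discretized universal wealth

print(f"best CRP weight on stock: {best:.2f}, wealth {max(wealths):.3f}")
print(f"approx. universal portfolio wealth: {universal:.3f}")
```

Here the best-in-hindsight CRP holds half in each asset, turning every double/halve pair into a 12.5% gain; the universal portfolio, which needs no hindsight, captures the same exponential growth rate asymptotically.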
10th: Nicolae Garleanu, UC Berkeley-Haas School of Business: Dynamic Portfolio Choice with Frictions
We show that the optimal portfolio can be derived explicitly in a large class of models with transitory and persistent transaction costs, multiple signals predicting returns, multiple assets, general correlation structure, time-varying volatility, and general dynamics. Our continuous-time model is shown to be the limit of discrete-time models with endogenous transaction costs due to optimal dealer behavior. Depending on the dealers’ inventory dynamics, we show that transitory transaction costs survive, respectively vanish, in the limit, corresponding to an optimal portfolio with bounded, respectively quadratic, variation. Finally, we provide equilibrium implications and illustrate the model’s broader applicability to economics.
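A stylized flavor of the result (a drastically simplified one-asset, constant-target caricature, not the paper's model) is a partial-adjustment rule: with quadratic transitory transaction costs, it is optimal to trade only part of the way toward the target position each period, with the trading speed falling as costs rise. The parameter values here are invented.

```python
# Partial adjustment toward a (hypothetical) frictionless target position.
target = 1.0   # assumed optimal position absent transaction costs
rate = 0.25    # trading speed; higher transaction costs => lower rate
x = 0.0        # current position
path = []
for _ in range(20):
    x += rate * (target - x)   # trade a fraction of the remaining gap
    path.append(x)

print(round(path[0], 3), round(path[-1], 3))  # prints "0.25 0.997"
```

The position approaches the target geometrically rather than jumping to it, which is why the optimal portfolio has bounded variation when transitory costs survive in the limit.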
17th: Michael Ohlrogge, Stanford: Securitization and Subprime Lending
The years from 2000 to 2007 saw the largest expansion of mortgage credit in American history, fueling an initial boom in wealth before leading to the most severe financial crisis since the Great Depression. Did financial innovations such as private mortgage securitization contribute causally to this build-up, or were they merely slightly more convenient ways to achieve outcomes that would have occurred anyway with conventional financial technology? To answer this question we exploit a natural experiment in which changes in rating policies by the S&P rating agency made it more expensive to privately securitize specific types of mortgages from certain states, but did nothing to impact other mortgages. The rating change was in response to state predatory lending laws that had already been in effect for some time in the affected states. Thus, the rating change was unrelated to any recent shifts in the risk or profitability of the affected mortgages. We use a difference-in-differences technique to examine the impact of increasing the cost of securitization on an effectively random group of mortgages. We find significant reductions in mortgage securitization following the change, but we also find suggestive evidence that some of this reduction was compensated for by an increase in loans held on the books of financial institutions. This suggests that while private securitization played a causal role in fueling the recent mortgage expansion, a significant portion of the expansion likely could have occurred even in its absence. We also find that loans that were more difficult to securitize defaulted at significantly reduced rates, suggesting that securitization also reduced lender incentives to carefully screen mortgages.
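The difference-in-differences logic can be sketched in a few lines: subtract the before/after change in the unaffected (control) mortgages from the change in the affected (treated) ones, so that a common time trend cancels and what remains is the effect of the rating change. All rates and sample sizes below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """DiD: (treated post - pre) minus (control post - pre)."""
    return (treated_post.mean() - treated_pre.mean()) - \
           (control_post.mean() - control_pre.mean())

# Simulated securitization indicators (1 = loan securitized).
control_pre  = rng.binomial(1, 0.60, 5000)
control_post = rng.binomial(1, 0.62, 5000)  # common time trend only
treated_pre  = rng.binomial(1, 0.60, 5000)
treated_post = rng.binomial(1, 0.52, 5000)  # trend plus a -10pp effect

est = did_estimate(treated_pre, treated_post, control_pre, control_post)
print(f"DiD estimate: {est:+.3f}")          # close to -0.10
```

In the paper's setting, the "treatment" is the increased cost of securitizing the mortgages targeted by the S&P rating change, and the identifying assumption is that treated and control mortgages would have trended in parallel absent that change.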
1st: Martin Lettau, UC Berkeley-Haas School of Business: Capital Share Risk and Shareholder Heterogeneity in U.S. Stock Pricing
Value and momentum strategies earn persistently large return premia yet are negatively correlated. Why? We show that a quantitatively large fraction of the negative correlation is explained by strong opposite-signed exposure of value and momentum portfolios to a single aggregate risk factor based on low-frequency fluctuations in the national capital share. Moreover, this negatively correlated component is priced. Models with capital share risk explain up to 85% of the variation in average returns on size-book/market portfolios and up to 95% of momentum returns, and the pricing errors on both sets of portfolios are lower than those of the Fama-French three- and four-factor models, the intermediary SDF model of Adrian, Etula, and Muir (2014), and models based on low-frequency exposure to aggregate consumption risk. None of the betas for these factors survive in a horse race where a long-horizon capital share beta is included. We show that the opposite-signed exposure of value and momentum to capital share risk coincides with opposite-signed exposure to the income shares of stockholders in the top 10 versus bottom 90 percent of the stock wealth distribution. The portfolios are priced as if representative investors from different percentiles of the wealth distribution pursue different investment strategies.
8th: Linda Schilling, University of Bonn: Capital Structure, Liquidity and Miscoordination on Runs
We analyze the impact of capital structure and market liquidity on the probability of runs on financial institutions in a coordination game played by uninsured short-term debt investors. The probability of a run increases in debt if liquidity is high but becomes non-monotone when liquidity dries up. Capital regulation may harm stability when liquidity dries up because the probability of runs may increase in equity. Caution needs to be exercised when using the liquidity coverage ratio as a measure of liquidity risk, because stability is non-monotone in liquidity coverage. The non-monotone comparative statics arise as we depart from the assumption of global strategic complementarities between actions. Investors draw on a common, finite pool of liquidity when deciding to withdraw. Conditional on a run, actions are strategic substitutes: withdrawing investors are served sequentially, and the probability of obtaining the deposit decreases in the proportion of withdrawing agents.