SEM217: Martin Lettau, UC Berkeley: High-Dimensional Factor Models and the Factor Zoo
Tuesday, August 29th @ 11:00-12:30 PM, RM 648 Evans Hall
This paper proposes a new approach to the “factor zoo” conundrum. Instead of applying dimension-reduction methods to a large set of portfolios obtained from sorts on characteristics, I construct factors that summarize the information in characteristics across assets and then sort assets into portfolios according to these “characteristic factors”. I estimate the model on a data set of mutual fund characteristics. Since the data set is 3-dimensional (characteristics of funds over time), characteristic factors are based on a tensor factor model (TFM) that is a generalization of 2-dimensional PCA. I find that parsimonious TFMs capture over 90% of the variation in the data set. Pricing factors derived from the TFM have high Sharpe ratios and capture the cross-section of fund returns better than standard benchmark models.
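As a rough, self-contained illustration of the tensor factor model idea, the numpy sketch below fits a rank-R CP (PARAFAC) decomposition, the simplest tensor generalization of two-dimensional PCA, to a simulated funds-by-characteristics-by-months array. The dimensions, rank, and data are hypothetical and unrelated to the paper's mutual fund sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-way panel: funds x characteristics x months (hypothetical sizes).
N, J, T, R = 50, 10, 60, 3            # R = number of "characteristic factors"

# Simulate a low-rank tensor plus noise.
A = rng.normal(size=(N, R))           # fund loadings
B = rng.normal(size=(J, R))           # characteristic loadings
C = rng.normal(size=(T, R))           # time factors
X = np.einsum('ir,jr,tr->ijt', A, B, C) + 0.1 * rng.normal(size=(N, J, T))

def unfold(X, mode):
    """Matricize a 3-way tensor along one mode (row-major convention)."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def khatri_rao(U, V):
    """Column-wise Kronecker product of two factor matrices."""
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

# Alternating least squares for a rank-R CP (PARAFAC) decomposition.
Ah, Bh, Ch = (rng.normal(size=(d, R)) for d in (N, J, T))
for _ in range(200):
    Ah = unfold(X, 0) @ np.linalg.pinv(khatri_rao(Bh, Ch).T)
    Bh = unfold(X, 1) @ np.linalg.pinv(khatri_rao(Ah, Ch).T)
    Ch = unfold(X, 2) @ np.linalg.pinv(khatri_rao(Ah, Bh).T)

Xhat = np.einsum('ir,jr,tr->ijt', Ah, Bh, Ch)
explained = 1 - np.linalg.norm(X - Xhat) ** 2 / np.linalg.norm(X) ** 2
print(f"share of variation captured by the rank-{R} CP model: {explained:.3f}")
```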
SEM217: Shota Ishii, ProssimoTech: Optimizing Financial Supply Chains - a Network Modeling Approach
Tuesday, September 12th @ 11:00-12:30 PM, RM 648 Evans Hall
SEM217: Alex Shkolnik, UC Santa Barbara: On the Markowitz Enigma for Minimum Variance
Tuesday, September 19th @ 11:00-12:30 PM, RM 648 Evans Hall
The Markowitz enigma entails the observation (by R. Michaud) that risk minimizers are, fundamentally, “estimation-error maximizers”. Principal component analysis (PCA), which is often used to construct equity risk models, is no exception to this principle. We show that a PCA-constructed minimum variance portfolio displays highly counterintuitive properties as more securities are added. For example, the ratio of the actual to the estimated portfolio variance grows without bound. The cause is the systematic (factor) risk that persists even as the number of securities tends to infinity. We derive a correction formula that adjusts the PCA model in such a way that, as the number of securities grows, this systematic risk vanishes. The resulting minimum variance portfolio achieves zero variance asymptotically. Beyond the theorems, we explore the results numerically by simulating security returns from a multi-factor model that incorporates market risk as well as style and industry risk factors.
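A minimal simulation in the spirit of the talk (not the paper's estimator or its correction formula): build a minimum variance portfolio from a one-factor PCA risk model and compare its true variance to the variance the model reports as the number of securities grows. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def variance_ratio(n_assets, n_obs=252, sigma_f=0.16, delta=0.5):
    """True vs. estimated variance of a PCA-based minimum variance portfolio."""
    beta = 1.0 + 0.5 * rng.normal(size=n_assets)            # dispersed betas
    f = sigma_f / np.sqrt(252) * rng.normal(size=n_obs)     # daily factor returns
    eps = delta / np.sqrt(252) * rng.normal(size=(n_obs, n_assets))
    R = np.outer(f, beta) + eps                             # one-factor returns

    # PCA risk model: leading eigenvector of the sample covariance plus a diagonal.
    S = np.cov(R, rowvar=False)
    vals, vecs = np.linalg.eigh(S)
    h, lam1 = vecs[:, -1], vals[-1]
    spec = np.maximum(np.diag(S) - lam1 * h ** 2, 1e-10)
    Sigma_hat = lam1 * np.outer(h, h) + np.diag(spec)

    # Fully-invested minimum variance weights from the estimated model.
    w = np.linalg.solve(Sigma_hat, np.ones(n_assets))
    w /= w.sum()

    Sigma_true = (sigma_f ** 2 / 252) * np.outer(beta, beta) \
        + (delta ** 2 / 252) * np.eye(n_assets)
    return (w @ Sigma_true @ w) / (w @ Sigma_hat @ w)

# The actual-to-estimated variance ratio tends to grow with the universe size.
for n in (100, 500, 2000):
    print(n, round(variance_ratio(n), 2))
```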
SEM217: Priya Donti, MIT: Optimization-in-the-loop ML for energy and climate
Tuesday, September 26th @ 11:00-12:30 PM
Addressing climate change will require concerted action across society, including the development of innovative technologies. While methods from machine learning (ML) have the potential to play an important role, these methods often struggle to contend with the physics, hard constraints, and complex decision-making processes that are inherent to many climate and energy problems. To address these limitations, I present the framework of “optimization-in-the-loop ML,” and show how it can enable the design of ML models that explicitly capture relevant constraints and decision-making processes. For instance, this framework can be used to design learning-based controllers that provably enforce the stability criteria or operational constraints associated with the systems in which they operate. It can also enable the design of task-based learning procedures that are cognizant of the downstream decision-making processes for which a model’s outputs will be used. By significantly improving performance and preventing critical failures, such techniques can unlock the potential of ML for operating low-carbon power grids, improving energy efficiency in buildings, and addressing other high-impact problems of relevance to climate action.
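A toy sketch of the task-based (decision-focused) learning idea, using hypothetical costs and data rather than the talk's power grid or building models: a linear forecaster whose output feeds an asymmetric downstream cost is trained on that decision cost instead of on squared forecast error.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical asymmetric decision costs: a shortfall is much worse than a surplus.
c_short, c_over = 10.0, 1.0

n, p = 2000, 4
Z = rng.normal(size=(n, p))
X = np.column_stack([np.ones(n), Z])          # features, with an intercept column
demand = 5.0 + Z @ np.array([1.0, 0.5, -0.5, 2.0]) + rng.normal(scale=0.5, size=n)

def decision_cost(q, d):
    """Downstream cost when the forecast q is used directly as the decision."""
    return np.mean(c_short * np.maximum(d - q, 0) + c_over * np.maximum(q - d, 0))

# Task-based training: subgradient descent on the decision cost itself.
w = np.zeros(p + 1)
for _ in range(2000):
    q = X @ w
    g = np.where(demand > q, -c_short, c_over)        # d(cost)/dq per observation
    w -= 0.01 * (X.T @ g) / n

# Conventional training: minimize squared forecast error, ignoring the decision.
w_mse = np.linalg.lstsq(X, demand, rcond=None)[0]

print("decision cost, task-trained:", round(decision_cost(X @ w, demand), 3))
print("decision cost, MSE-trained :", round(decision_cost(X @ w_mse, demand), 3))
```

With costs this asymmetric, the task-trained model learns to over-forecast (a conditional quantile rather than the mean), which the squared-error fit cannot do.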
SEM217: Emmanouil Platanakis, University of Bath: When Bayes-Stein Meets Machine Learning: A Generalized Approach for Portfolio Optimization
Tuesday, October 3rd @ 11:00-12:30 PM
The Bayes-Stein model is widely used to tackle parameter uncertainty in the classical Markowitz mean-variance portfolio optimization framework. In practice, however, it suffers from estimation errors and often fails to outperform the naive 1/N asset allocation rule. To address this, we develop a generalized counterpart that leverages machine learning (ML) techniques to estimate core model parameters. Specifically, we propose a time-dependent weighted Elastic Net (TW-ENet) approach to predict expected asset returns, a hybrid double selective clustering combination (HDS-CC) strategy to calibrate shrinkage factors, and a graphical adaptive Elastic Net (GA-ENet) algorithm to estimate the inverse covariance matrix. Extensive empirical studies show that the ML-augmented model leads to significant and persistent out-of-sample gains over the 1/N strategy. More broadly, our work demonstrates how machine learning can be leveraged to overcome longstanding limitations and unlock value in conventional finance models.
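The sketch below is a heavily simplified stand-in for the paper's pipeline: a plain scikit-learn Elastic Net replaces TW-ENet for the return forecasts, and a textbook (Jorion-style) Bayes-Stein shrinkage stands in for HDS-CC and GA-ENet. Data, features, and tuning values are hypothetical.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(3)

# Simulated monthly panel: K lagged signals predicting N asset returns over T months.
T, N, K = 120, 10, 6
F = rng.normal(size=(T, K))
B = rng.normal(scale=0.02, size=(K, N))
R = 0.005 + F @ B + rng.normal(scale=0.04, size=(T, N))

# 1) ML step: forecast each asset's next-period mean return with an Elastic Net.
mu_ml = np.array([
    ElasticNet(alpha=0.001, l1_ratio=0.5).fit(F[:-1], R[1:, i]).predict(F[-1:])[0]
    for i in range(N)
])

# 2) Bayes-Stein step: shrink the forecasts toward the minimum-variance mean.
Sigma = np.cov(R, rowvar=False)
Sigma_inv = np.linalg.inv(Sigma)
ones = np.ones(N)
mu_0 = (ones @ Sigma_inv @ mu_ml) / (ones @ Sigma_inv @ ones)   # shrinkage target
d = mu_ml - mu_0 * ones
lam = (N + 2) / (d @ Sigma_inv @ d)
w_shrink = lam / (lam + T)
mu_bs = (1 - w_shrink) * mu_ml + w_shrink * mu_0 * ones

# 3) Mean-variance weights from the shrunk means (unconstrained, for illustration).
w = Sigma_inv @ mu_bs
w /= w.sum()
print("shrinkage intensity:", round(w_shrink, 3))
print("portfolio weights  :", np.round(w, 3))
```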
SEM217: Robert Anderson, UC Berkeley & CDAR: General Equilibrium Theory for Climate Change
Tuesday, October 10th @ 11:00-12:30 PM, RM 648 Evans Hall
We propose two general equilibrium models, quota equilibrium and emission tax equilibrium. The government specifies quotas or taxes on emissions, then refrains from further action. Quota equilibrium exists; the allocation of emission property rights strongly impacts the distribution of welfare. If the only externality arises from total net emissions, quota equilibrium is constrained Pareto optimal. Every quota equilibrium can be realized as an emission tax equilibrium and vice versa. However, for certain tax rates, emission tax equilibrium may not exist, or may exhibit high multiplicity. Full Pareto optimality of quota equilibrium can often be achieved by setting the right quota.
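A toy single-firm illustration of the quota/tax equivalence claim (not the paper's general equilibrium model): under a quadratic benefit of emitting, the quota that matches the tax outcome is decentralized by a tax equal to the marginal benefit at that quota.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical benefit of emitting e units: B(e) = a*e - 0.5*b*e**2.
a, b = 10.0, 2.0

def emissions_under_tax(t):
    """Firm's profit-maximizing emissions when each unit is taxed at rate t."""
    res = minimize_scalar(lambda e: -(a * e - 0.5 * b * e ** 2 - t * e),
                          bounds=(0.0, a / b), method="bounded")
    return res.x

tax = 4.0
e_tax = emissions_under_tax(tax)     # analytic answer: (a - t) / b = 3
quota = e_tax                        # quota chosen to reproduce the tax outcome
supporting_tax = a - b * quota       # marginal benefit at the quota = 4
print(round(e_tax, 3), round(quota, 3), round(supporting_tax, 3))
```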
SEM217: Alec Kercheval, Florida State University: Portfolio Selection via Strategy-Specific Eigenvector Shrinkage
Tuesday, October 17th @ 11:00-12:30 PM, RM 648 Evans Hall [ZOOM]
Portfolio managers need to estimate risk for many assets simultaneously with a limited number of useful observations. The standard approach is to do this using factor models, which reduce the number of variables that need to be estimated in the resulting structured covariance matrix. Even in a one-factor setting, there remains the open problem of finding a good estimate for the leading eigenvector (usually called beta), which represents the loadings on the single factor.
We describe how to apply a statistical approach known as shrinkage to the novel setting of eigenvectors of unknown matrices. We can do so in a way that is customized to the particular constraints of a portfolio optimization problem, resulting in an estimated portfolio that is quantifiably better than one obtained by standard principal component analysis. This is joint work with Lisa Goldberg and Hubeyb Gurdogan.
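A minimal sketch of the eigenvector-shrinkage idea in a simulated one-factor market: the leading sample eigenvector is pulled toward the equal-weight direction before the minimum variance portfolio is built. The shrinkage intensity below is a fixed illustrative value, not the data-driven choice the paper derives, so it is not guaranteed to improve on raw PCA.

```python
import numpy as np

rng = np.random.default_rng(4)

# One-factor market with N assets and T daily observations (illustrative values).
N, T = 500, 252
beta = 1.0 + 0.3 * rng.normal(size=N)
R = np.outer(rng.normal(size=T), beta) * 0.01 + 0.02 * rng.normal(size=(T, N))

S = np.cov(R, rowvar=False)
vals, vecs = np.linalg.eigh(S)
h = vecs[:, -1] * np.sign(vecs[:, -1].sum())       # sample leading eigenvector
target = np.ones(N) / np.sqrt(N)                   # shrinkage target direction

rho = 0.5                                          # illustrative intensity only
h_shrunk = (1 - rho) * h + rho * target
h_shrunk /= np.linalg.norm(h_shrunk)

def min_var_weights(b, lam, spec):
    """Fully-invested min-variance weights for a rank-one-plus-diagonal model."""
    Sigma = lam * np.outer(b, b) + np.diag(spec)
    w = np.linalg.solve(Sigma, np.ones(len(b)))
    return w / w.sum()

spec = np.maximum(np.diag(S) - vals[-1] * h ** 2, 1e-10)
w_pca = min_var_weights(h, vals[-1], spec)
w_shr = min_var_weights(h_shrunk, vals[-1], spec)

Sigma_true = (0.01 ** 2) * np.outer(beta, beta) + (0.02 ** 2) * np.eye(N)
print("true variance, PCA beta    :", w_pca @ Sigma_true @ w_pca)
print("true variance, shrunk beta :", w_shr @ Sigma_true @ w_shr)
```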
SEM217: Baeho Kim, Korea University Business School: Conditional Tail Sampling for General Marked Point Processes
Tuesday, October 24th @ 11:00-12:30 PM, RM 648 Evans Hall
This study develops a simple but innovative simulation technique for simulating a broad range of marked point processes conditional on a tail event of interest. Our proposed conditional tail sampling algorithm guarantees that every simulated path hits the tail event with probability one, enabling efficient estimation of tail probabilities and of expected random quantities conditional on rare events. Transforming the arrival times into uniformly distributed random variables streamlines the simulation and can significantly reduce the simulation variance under the conditional probability measure associated with the specific tail event. Numerical results demonstrate the superior performance of the proposed approach in generating unbiased estimators for rare-event probabilities of clustered epidemiological events in a network, in calculating conditional expectations of the maximum drawdown for insurance risk analysis, and in accurately determining fair credit spreads of risky fixed-income securities over (ultra-)short horizons under realistic yet complex model specifications.
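A simplified special case of the idea, for a homogeneous Poisson process rather than a general marked point process: given the count, arrival times are order statistics of i.i.d. uniforms, so drawing the count from the Poisson law truncated to the tail event guarantees that every path hits the event.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(5)

# Rare tail event: at least k arrivals of a rate-lam Poisson process by time T.
lam, T, k = 2.0, 1.0, 8

def sample_conditional_path():
    """Simulate arrival times conditional on {N(T) >= k}; every path qualifies."""
    # Inverse-CDF draw from the Poisson distribution truncated to {N >= k}.
    u = rng.uniform(poisson.cdf(k - 1, lam * T), 1.0)
    n = int(poisson.ppf(u, lam * T))
    # Given the count, arrivals are order statistics of i.i.d. Uniform(0, T).
    return np.sort(rng.uniform(0.0, T, size=n))

p_tail = poisson.sf(k - 1, lam * T)            # exact P(N(T) >= k)

# Conditional Monte Carlo estimate of E[max inter-arrival gap | tail event];
# plain simulation would discard almost every path, since p_tail is tiny.
gaps = []
for _ in range(10_000):
    t = sample_conditional_path()
    gaps.append(np.max(np.diff(np.concatenate(([0.0], t, [T])))))
print("P(tail event)           :", p_tail)
print("E[max gap | tail event] :", np.mean(gaps))
```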
SEM217: Samim Ghamami, U.S. Securities and Exchange Commission, DERA: Skin in the Game: Risk Analysis of Central Counterparties
Tuesday, October 31st @ 11:00-12:30 PM, ZOOM
This paper introduces an incentive compatibility framework to analyze agency problems linked to central counterparty (CCP) risk management. Our framework, which is based on a modern approach to extreme value theory, is used to design CCP skin-in-the-game (SITG). We show that under inadequate SITG levels, members are more exposed to default losses than CCPs. The resulting risk management incentive distortions could be mitigated by using the proposed SITG formulations. Our analysis addresses both investor-owned and member-owned CCPs, as well as multilayered and monolayer default waterfalls. Viewing the total size of SITG as the lower bound on CCP regulatory capital, the framework can be used to improve capital regulation of investor-owned and member-owned CCPs. We also demonstrate that bank capital rules for CCP exposures may underestimate risk. The broader central clearing mandate for U.S. Treasuries may take place under monolayer CCPs; these clearinghouses may need to allocate more of their own capital to the default waterfall.
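As a hedged illustration of the extreme value theory ingredient (not the paper's SITG formulations), the sketch below fits a generalized Pareto distribution to default-loss exceedances over a high threshold and reads off a tail quantile of the kind that could inform the sizing of prefunded resources. Losses, threshold, and confidence level are hypothetical.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(6)

# Simulated heavy-tailed member default losses (hypothetical units).
losses = rng.pareto(3.0, size=5_000) * 10.0
u = np.quantile(losses, 0.95)                 # high threshold for peaks-over-threshold
exceed = losses[losses > u] - u

# Fit a generalized Pareto distribution to the exceedances (location fixed at 0).
xi, _, sigma = genpareto.fit(exceed, floc=0.0)

# Tail quantile ("stress loss") implied by the peaks-over-threshold model.
p_exceed = (losses > u).mean()
target = 0.995
q = u + genpareto.ppf(1 - (1 - target) / p_exceed, xi, loc=0.0, scale=sigma)
print("GPD shape, scale:", round(xi, 3), round(sigma, 3))
print("99.5% stress loss from the POT model:", round(q, 2))
```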
SEM217: Lynne Burks, One Concern: Resilience Analytics to Measure Physical Risk at Scale
Tuesday, November 7th @ 11:00-12:30 PM, RM 648 Evans Hall
One Concern has built a Digital Twin of the world around us, revealing hidden risks that natural disasters, extreme weather, and climate change pose across the built and natural environments. We have leveraged a mix of physics-based and machine learning models to predict the impacts of disasters not just on buildings, but on the communities and infrastructure on which they depend. In this presentation, we will discuss the underlying models of One Concern’s products that are used to predict business downtime across multiple hazards and lifelines. These downtimes translate into resilience metrics that facilitate comparisons across buildings, across geographies, and over time. This enables industry and government to understand their true vulnerabilities and perform the most impactful mitigations.
SEM217: Haim Bar, University of Connecticut: On Graphical Models and Convex Geometry
Tuesday, November 28th @ 11:00-12:30 PM
We introduce a mixture model of beta distributions to identify significant correlations among P predictors when P is large. The method relies on theorems in convex geometry, which we use to show how to control the error rate of edge detection in graphical models. Our ‘betaMix’ method does not require any assumptions about the network structure, nor does it assume that the network is sparse. The results hold for a wide class of data-generating distributions that include light-tailed and heavy-tailed spherically symmetric distributions.
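A simplified two-component sketch in the spirit of betaMix (not the authors' estimator): map each pairwise cosine similarity to z = (r + 1)/2, whose null law for independent spherically symmetric columns is Beta((n-1)/2, (n-1)/2), and separate null from signal pairs with a small EM fit of a two-component beta mixture.

```python
import numpy as np
from scipy.stats import beta as beta_dist

rng = np.random.default_rng(7)

# Toy data: P predictor columns, a few of which are genuinely connected.
n, P = 100, 40
X = rng.normal(size=(n, P))
X[:, 1] = X[:, 0] + 0.3 * rng.normal(size=n)
X[:, 2] = X[:, 0] + 0.3 * rng.normal(size=n)

Xn = X / np.linalg.norm(X, axis=0)
r = Xn.T @ Xn
iu = np.triu_indices(P, k=1)
z = (r[iu] + 1.0) / 2.0

a0 = (n - 1) / 2.0                         # null component: Beta(a0, a0), fixed
null_pdf = beta_dist.pdf(z, a0, a0)

pi1, a1, b1 = 0.05, 2.0, 2.0               # initial "signal" component
for _ in range(100):                       # EM iterations
    f1 = beta_dist.pdf(z, a1, b1)
    resp = pi1 * f1 / (pi1 * f1 + (1 - pi1) * null_pdf)   # E-step
    pi1 = resp.mean()                                      # M-step: mixing weight
    m = np.average(z, weights=resp)                        # weighted moment fit
    v = np.average((z - m) ** 2, weights=resp)
    common = m * (1 - m) / max(v, 1e-12) - 1
    a1, b1 = max(m * common, 0.1), max((1 - m) * common, 0.1)

edges = resp > 0.5                          # pairs assigned to the signal component
print("estimated edge proportion:", round(pi1, 4))
print("detected edges:", [(int(i), int(j)) for i, j in zip(iu[0][edges], iu[1][edges])])
```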