
SEM217: Emmanouil Platanakis, University of Bath: When Bayes-Stein Meets Machine Learning: A Generalized Approach for Portfolio Optimization
Tuesday, October 3rd @ 11:00-12:30 PM, ZOOM
The Bayes-Stein model is widely used to tackle parameter uncertainty in the classical Markowitz mean-variance portfolio optimization framework. In practice, however, it suffers from estimation errors and often fails to outperform the naive 1/N asset allocation rule. To address this, we develop a generalized counterpart that leverages machine learning (ML) techniques to estimate several core model parameters. Specifically, we propose a time-dependent weighted Elastic Net (TW-ENet) approach to predict expected asset returns, a hybrid double selective clustering combination (HDS-CC) strategy to calibrate shrinkage factors, and a graphical adaptive Elastic Net (GA-ENet) algorithm to estimate the inverse covariance matrix. Extensive empirical studies show that the ML-augmented model leads to significant and persistent out-of-sample gains over the 1/N strategy. More broadly, our work demonstrates how machine learning can be leveraged to overcome longstanding limitations and unlock value in conventional finance models.
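
For context, a minimal sketch of the classical Bayes-Stein estimator of expected returns (Jorion-style shrinkage toward the minimum-variance-portfolio mean) that the talk generalizes is shown below; the sample data and the helper name `bayes_stein_mean` are illustrative, not part of the paper, which replaces these plug-in estimates with the ML components named above.

```python
import numpy as np

def bayes_stein_mean(returns):
    """Shrink sample mean returns toward a common grand mean (classical Bayes-Stein)."""
    T, N = returns.shape
    mu = returns.mean(axis=0)                     # sample mean per asset
    sigma = np.cov(returns, rowvar=False)         # sample covariance matrix
    sigma_inv = np.linalg.pinv(sigma)
    ones = np.ones(N)
    # Shrinkage target: expected return of the global minimum-variance portfolio
    mu0 = (ones @ sigma_inv @ mu) / (ones @ sigma_inv @ ones)
    diff = mu - mu0 * ones
    # Data-driven shrinkage intensity between 0 and 1
    w = (N + 2) / ((N + 2) + T * (diff @ sigma_inv @ diff))
    return (1.0 - w) * mu + w * mu0 * ones

rng = np.random.default_rng(0)
sample = rng.normal(0.005, 0.04, size=(120, 10))  # 120 months, 10 assets (toy data)
print(bayes_stein_mean(sample))
```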

SEM217: Robert Anderson, UC Berkeley & CDAR: General Equilibrium Theory for Climate Change
Tuesday, October 10th @ 11:00-12:30 PM, RM 648 Evans Hall [ZOOM]
We propose two general equilibrium models, quota equilibrium and emission tax equilibrium. The government specifies quotas or taxes on emissions, then refrains from further action. Quota equilibrium exists; the allocation of emission property rights strongly impacts the distribution of welfare. If the only externality arises from total net emissions, quota equilibrium is constrained Pareto Optimal. Every quota equilibrium can be realized as an emission tax equilibrium and vice versa. However, for certain tax rates, emission tax equilibrium may not exist, or may exhibit high multiplicity. Full Pareto Optimality of quota equilibrium can often be achieved by setting the right quota.

SEM217: Alec Kercheval, Florida State University: Portfolio Selection via Strategy-Specific Eigenvector Shrinkage
Tuesday, October 17th @ 11:00-12:30 PM, RM 648 Evans Hall [ZOOM]
Portfolio managers need to estimate risk for many assets simultaneously with a limited number of useful observations. The standard approach is to do this using factor models, which reduce the number of variables that need to be estimated in the resulting structured covariance matrix. Even in a one-factor setting, there remains the open problem of finding a good estimate for the leading eigenvector (usually called beta) representing the loadings on the single factor.
We describe how to apply a statistical approach known as shrinkage to the novel setting of eigenvectors of unknown matrices. We can do so in a way that is customized to the particular constraints of a portfolio optimization problem, resulting in an estimated portfolio that is quantifiably better than one obtained by standard principal component analysis. This is joint work with Lisa Goldberg and Hubeyb Gurdogan.
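
As a rough illustration of the general idea (not the talk's strategy-specific method), the sketch below shrinks the sample leading eigenvector of a covariance matrix toward an equal-loading target; the fixed blending weight `alpha` is a placeholder, whereas the work described above derives the shrinkage from the portfolio problem itself.

```python
import numpy as np

def shrunk_leading_eigenvector(returns, alpha=0.5):
    """Blend the PCA leading eigenvector ("beta") with an equal-loading target."""
    T, N = returns.shape
    sigma = np.cov(returns, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(sigma)
    h = eigvecs[:, -1]                       # sample leading eigenvector
    h = np.sign(h.sum()) * h                 # fix sign so loadings are mostly positive
    q = np.ones(N) / np.sqrt(N)              # equal-loading target direction
    v = (1 - alpha) * h + alpha * q          # shrink toward the target
    return v / np.linalg.norm(v)

rng = np.random.default_rng(1)
beta_true = 1 + 0.2 * rng.standard_normal(50)
factor = 0.02 * rng.standard_normal((60, 1))
returns = factor @ beta_true[None, :] + 0.04 * rng.standard_normal((60, 50))
print(shrunk_leading_eigenvector(returns)[:5])
```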

SEM217: Baeho Kim, Korea University Business School:
Tuesday, October 24th @ 11:00-12:30 PM, RM 648 Evans Hall

SEM217: Samim Ghamami, U.S. Securities and Exchange Commission, DERA:
Tuesday, October 31st @ 11:00-12:30 PM, RM 648 Evans Hall

SEM217: Lynne Burks, One Concern:
Tuesday, November 7th @ 11:00-12:30 PM, RM 648 Evans Hall

SEM217: Nick Gunther
Tuesday, November 14th @ 11:00-12:30 PM, RM 648 Evans Hall

SEM217: Happy Thanksgiving: No seminar this week
Tuesday, November 21st @ 11:00-12:30 PM, RM 648 Evans Hall

SEM217: Haim Bar, University of Connecticut:
Tuesday, November 28th @ 11:00-12:30 PM, RM 648 Evans Hall

SEM217: Martin Lettau, UC Berkeley: High-Dimensional Factor Models and the Factor Zoo
Tuesday, August 29th @ 11:00-12:30 PM, RM 648 Evans Hall
This paper proposes a new approach to the “factor zoo” conundrum. Instead of applying dimension-reduction methods to a large set of portfolios that are obtained from sorts on characteristics, I construct factors that summarize the information in characteristics across assets and then sort assets into portfolios according to these “characteristic factors”. I estimate the model on a data set of mutual fund characteristics. Since the data set is 3-dimensional (characteristics of funds over time), characteristic factors are based on a tensor factor model (TFM) that is a generalization of 2-dimensional PCA. I find that parsimonious TFMs capture over 90% of the variation in the data set. Pricing factors derived from the TFM have high Sharpe ratios and capture the cross-section of fund returns better than standard benchmark models.
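
To illustrate how a 3-dimensional data set can be reduced mode by mode, the sketch below computes a simple higher-order SVD basis for a funds-by-characteristics-by-time array; the array shape, rank choices, and helper names are illustrative and not the paper's specification, which builds characteristic factors from its own tensor factor model.

```python
import numpy as np

def mode_unfold(tensor, mode):
    """Unfold a 3-D array along one mode into a matrix."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def hosvd_factors(tensor, ranks):
    """Return the leading singular vectors of each mode (a higher-order SVD basis)."""
    factors = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(mode_unfold(tensor, mode), full_matrices=False)
        factors.append(u[:, :r])
    return factors

rng = np.random.default_rng(2)
data = rng.standard_normal((200, 15, 120))      # funds x characteristics x months (toy)
funds_f, char_f, time_f = hosvd_factors(data, ranks=(5, 3, 3))
print(funds_f.shape, char_f.shape, time_f.shape)
```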

SEM217: Shota Ishii, ProssimoTech: Optimizing Financial Supply Chains - a Network Modeling Approach
Tuesday, September 12th @ 11:00-12:30 PM, RM 648 Evans Hall

SEM217: Alex Shkolnik, UC Santa Barbara: On the Markowitz Enigma for Minimum Variance
Tuesday, September 19th @ 11:00-12:30 PM, RM 648 Evans Hall
The Markowitz enigma refers to the observation (due to R. Michaud) that risk minimizers are, fundamentally, “estimation-error maximizers”. Principal component analysis (PCA), which is often used to construct equity risk models, is no exception to this principle. We show that a PCA-constructed minimum variance portfolio displays highly counterintuitive properties as more securities are added. For example, the ratio of the actual to the estimated portfolio variance grows without bound. The cause is the systematic (factor) risk that persists even as the number of securities tends to infinity. We derive a correction formula that adjusts the PCA model in such a way that, as the number of securities grows, this systematic risk vanishes. The resulting minimum variance portfolio achieves zero variance asymptotically. Aside from theorems, we explore the results numerically by simulating security returns from a multi-factor model that incorporates market risk as well as style and industry risk factors.
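
A minimal simulation sketch of the phenomenon described above, assuming a single-factor market and toy parameters (not the talk's multi-factor setup or its correction formula): a minimum-variance portfolio built from a one-factor PCA covariance estimate typically has true variance far above its estimated variance when securities outnumber observations.

```python
import numpy as np

rng = np.random.default_rng(3)
N, T = 500, 250                                    # many securities, few observations
beta = 1 + 0.3 * rng.standard_normal(N)            # true factor loadings
factor = 0.04 * rng.standard_normal(T)              # common factor returns
returns = factor[:, None] * beta[None, :] + 0.06 * rng.standard_normal((T, N))
sigma_true = 0.04**2 * np.outer(beta, beta) + 0.06**2 * np.eye(N)

# One-factor PCA covariance estimate from the sample
sample_cov = np.cov(returns, rowvar=False)
vals, vecs = np.linalg.eigh(sample_cov)
h, lam = vecs[:, -1], vals[-1]
spec = np.mean(vals[:-1])                            # average residual eigenvalue
sigma_pca = lam * np.outer(h, h) + spec * np.eye(N)

# Minimum-variance weights under the PCA estimate
ones = np.ones(N)
w = np.linalg.solve(sigma_pca, ones)
w /= ones @ w

est_var = w @ sigma_pca @ w
true_var = w @ sigma_true @ w
print("true/estimated variance ratio:", true_var / est_var)   # typically well above 1
```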

SEM217: Priya Donti, MIT: Optimization-in-the-loop ML for energy and climate
Tuesday, September 26th @ 11:00-12:30 PM via ZOOM
Addressing climate change will require concerted action across society, including the development of innovative technologies. While methods from machine learning (ML) have the potential to play an important role, these methods often struggle to contend with the physics, hard constraints, and complex decision-making processes that are inherent to many climate and energy problems. To address these limitations, I present the framework of “optimization-in-the-loop ML,” and show how it can enable the design of ML models that explicitly capture relevant constraints and decision-making processes. For instance, this framework can be used to design learning-based controllers that provably enforce the stability criteria or operational constraints associated with the systems in which they operate. It can also enable the design of task-based learning procedures that are cognizant of the downstream decision-making processes for which a model’s outputs will be used. By significantly improving performance and preventing critical failures, such techniques can unlock the potential of ML for operating low-carbon power grids, improving energy efficiency in buildings, and addressing other high-impact problems of relevance to climate action.
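
As a toy illustration of one ingredient of this idea (not the speaker's framework), the sketch below shows an output layer that enforces a hard equality constraint, here generation balancing demand, exactly inside the model rather than penalizing violations; the numbers and the helper name `balance_layer` are purely illustrative.

```python
import numpy as np

def balance_layer(raw_output, demand):
    """Project a raw dispatch prediction onto the hyperplane sum(x) = demand."""
    correction = (demand - raw_output.sum()) / raw_output.size
    return raw_output + correction               # exact, and differentiable in raw_output

raw = np.array([80.0, 120.0, 95.0])              # e.g. an unconstrained network output (MW)
dispatch = balance_layer(raw, demand=310.0)
print(dispatch, dispatch.sum())                  # output now sums exactly to demand
```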