All seminars are held in 639 Evans Hall at UC Berkeley, unless otherwise notified.
The Tax-Loss Harvesting Life Cycle
A 43-Year Retrospective of Equity Indexing Strategies for Taxable Investors
Tax-loss harvesting aims to realize losses on individual stocks in conjunction with an investment objective such as index tracking. In this talk, we give a historical appraisal of the value of tax-loss harvesting to taxable investors with realized gains in their portfolios. Our study provides insight into the life cycle of a tax-loss harvesting strategy, which has its youth, midlife, and golden years. Read the paper this talk is based on.
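As a minimal sketch of the mechanic (the lots, prices, and 35% tax rate below are hypothetical, for illustration only), harvesting sells lots trading below their cost basis, and the realized loss offsets gains elsewhere at the investor's tax rate:

```python
# Toy tax-loss harvesting pass over a list of tax lots (hypothetical data).
# A lot is harvested when its market price is below its cost basis; the
# realized loss offsets realized gains elsewhere at the assumed tax rate.
TAX_RATE = 0.35  # assumed rate, for illustration only

lots = [  # (ticker, shares, cost basis per share, current price)
    ("AAA", 100, 50.0, 42.0),
    ("BBB", 200, 30.0, 31.5),
    ("CCC", 150, 20.0, 12.0),
]

harvested_loss = sum(
    shares * (basis - price)
    for _, shares, basis, price in lots
    if price < basis  # only lots trading below basis are sold
)
tax_saving = TAX_RATE * harvested_loss
print(f"harvested loss: {harvested_loss:.2f}, tax saving: {tax_saving:.2f}")
```

In practice the harvested proceeds are reinvested to keep tracking the index, subject to wash-sale constraints, which is what ties the strategy's value to its life cycle.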
We develop an estimator for latent factors in a large-dimensional panel of financial data that can explain expected excess returns. Statistical factor analysis based on Principal Component Analysis (PCA) has problems identifying factors with a small variance that are nonetheless important for asset pricing. Our estimator searches for factors with a high Sharpe ratio that can explain both the expected-return and covariance structure. We derive the statistical properties of the new estimator and show that it can find asset-pricing factors that cannot be detected with PCA, even when a large amount of data is available. Applying the approach to portfolio and stock data, we find factors with Sharpe ratios more than twice as large as those based on conventional PCA. Our factors accommodate a large set of anomalies better than prominent four- and five-factor alternative models. Read the paper this talk is based on.
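The blind spot of covariance-based PCA is easy to demonstrate (a toy construction of ours, not the talk's estimator): the sample covariance is unchanged by shifting asset means, so PCA cannot distinguish two panels whose pricing information differs only through the means, whereas a mean-augmented second-moment matrix can:

```python
import numpy as np

rng = np.random.default_rng(7)
T, N = 500, 10

# Two return panels with identical covariances but different means:
# the second adds a constant expected-return spread across assets.
X1 = rng.standard_normal((T, N)) * 0.02
mu = np.linspace(0.0, 0.01, N)        # assumed expected-return spread
X2 = X1 + mu

cov1, cov2 = np.cov(X1, rowvar=False), np.cov(X2, rowvar=False)
print(np.allclose(cov1, cov2))        # True: PCA sees no difference

# A second-moment matrix that up-weights means (gamma > 0) does differ,
# which is the lever a pricing-oriented factor estimator can exploit.
gamma = 10.0
M1 = cov1 + gamma * np.outer(X1.mean(0), X1.mean(0))
M2 = cov2 + gamma * np.outer(X2.mean(0), X2.mean(0))
print(np.allclose(M1, M2))            # False
```

Eigenvectors of the augmented matrix therefore tilt toward directions with high means relative to their variance, i.e., high Sharpe ratios.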
The theory of inference from simple random samples (SRSs) is fundamental in statistics; many statistical techniques and formulae assume that the data are an SRS. True random samples are rare; in practice, people tend to draw samples by using pseudo-random number generators (PRNGs) and algorithms that map a set of pseudo-random numbers into a subset of the population. Most statisticians take for granted that the software they use "does the right thing," producing samples that can be treated as if they are SRSs. In fact, the PRNG and the algorithm for drawing samples matter enormously. We show, using basic counting principles, that some widely used methods cannot generate all SRSs of a given size, and those that can do not always do so with equal frequencies in simulations. We compare the "randomness" and computational efficiency of commonly used PRNGs to PRNGs based on cryptographic hash functions, which avoid these pitfalls. We judge these PRNGs by their ability to generate SRSs and find in simulations that their relative merits vary by seed, population and sample size, and sampling algorithm. These results are not limited to SRSs but have implications for all resampling methods, including the bootstrap, MCMC, and Monte Carlo integration. Read the paper this talk is based on.
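The counting principle is easy to check directly (the population and sample sizes below are a hypothetical illustration, not figures from the paper): a PRNG whose output stream is determined entirely by a 32-bit seed can reach at most 2^32 distinct samples, far fewer than the number of SRSs of even modest size:

```python
from math import comb

# Number of simple random samples of size 10 from a population of 1,000
n_srs = comb(1000, 10)

# A generator seeded with 32 bits can follow at most 2**32 distinct
# output streams, hence produce at most 2**32 distinct samples.
n_seeds = 2 ** 32

print(f"SRSs: {n_srs:.3e}, reachable samples: at most {n_seeds:.3e}")
print(f"fraction of SRSs reachable: at most {n_seeds / n_srs:.2e}")
assert n_srs > n_seeds  # the vast majority of SRSs can never be drawn
```

Cryptographic-hash-based PRNGs sidestep this by supporting much larger seed/state spaces, which is one reason the paper advocates them.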
We study investments in impact funds, defined as venture capital or growth equity funds with dual objectives of generating financial returns and positive externalities. Being an impact fund elevates a fund’s marginal investment rate by 14.1% relative to a traditional VC fund, even more so for funds focused on environmental, poverty, and minority/women issues. Europeans and UNPRI signatories have sharply higher demand for impact. Three investor attributes – household-backed capital, mission-oriented investors, and investors facing political/regulatory pressure to invest in impact – account for the higher impact demand. In contrast, legal restrictions against impact (e.g., ERISA) hinder 25% of total demand. Download slides from this presentation.
This paper presents a formal model for the theory of popularity laid out informally by Idzorek and Ibbotson in their seminal paper, “Dimensions of Popularity” (Journal of Portfolio Management, 2014). It does so by extending the capital asset pricing model (CAPM) to include security characteristics that different investors regard differently. This leads to an equilibrium in which: 1) the expected excess return on each security is a linear function of its beta and its popularity loadings, which measure the popularity of the security based on its characteristics relative to those of the beta-adjusted market portfolio; 2) each investor holds a different portfolio based on his attitudes toward security characteristics; and 3) the market portfolio is not on the efficient frontier. I call this extended model the Popularity Asset Pricing Model, or PAPM for short. Download slides from this presentation.
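Schematically (the notation below is ours, for illustration, and not necessarily the paper's), the equilibrium pricing relation in point 1 takes a CAPM-style form augmented with popularity premia:

```latex
% Expected excess return = beta term + popularity-loading terms
\mathrm{E}[R_i] - R_f \;=\; \beta_i \,\bigl(\mathrm{E}[R_M] - R_f\bigr) \;+\; \sum_{k} \varphi_{ik}\, \pi_k
```

where $\varphi_{ik}$ is security $i$'s loading on popularity characteristic $k$, measured relative to the beta-adjusted market portfolio, and $\pi_k$ is the equilibrium premium attached to that characteristic. Setting all $\pi_k = 0$ recovers the standard CAPM, in which the market portfolio is again efficient.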
We examine the network of bilateral trading relations between insurers and dealers in the over-the-counter corporate bond market. Using comprehensive regulatory data, we find that many insurers use only one dealer, while the largest insurers have a network of up to eighty dealers. To understand the heterogeneity in network size, we build a model of decentralized trade in which insurers trade off the benefits of repeat business against more intense dealer competition. Empirically, large insurers form more relations and receive better prices than small insurers. The model matches both the distribution of insurers’ network sizes and how prices depend on insurers’ size and the size of their dealer network. Download slides from this presentation.
Managing a portfolio to a risk model can tilt the portfolio toward weaknesses of the model. As a result, the optimized portfolio acquires downside exposure to uncertainty in the model itself, what we call “second order risk.” We propose a risk measure that accounts for this bias. Studies of real portfolios, in asset-by-asset and factor model contexts, demonstrate that second order risk contributes significantly to realized volatility, and that the proposed measure accurately forecasts the out-of-sample behavior of optimized portfolios. Download slides from this presentation.
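A small simulation (our construction for intuition, not the paper's proposed measure) illustrates the bias: optimizing against an estimated covariance matrix yields a predicted variance that systematically understates the portfolio's variance under the true covariance, because the optimizer exploits sampling error in the estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
n_assets, n_obs = 40, 80

# True covariance is the identity; the optimizer only sees a noisy estimate.
returns = rng.standard_normal((n_obs, n_assets))
sample_cov = np.cov(returns, rowvar=False)

# Minimum-variance weights computed from the *estimated* covariance
ones = np.ones(n_assets)
w = np.linalg.solve(sample_cov, ones)
w /= w.sum()

predicted_var = w @ sample_cov @ w   # what the risk model reports
true_var = w @ w                     # variance under the true (identity) covariance
print(f"predicted: {predicted_var:.4f}, true: {true_var:.4f}")
```

The gap between the two is a simple instance of second order risk: the portfolio is tilted exactly toward directions where the model understates risk.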
In this talk we will discuss how the top eigenvalue/eigenvector pair evolves through time for estimators of covariance and correlation matrices of equity-return-type data, by which we mean matrices whose top eigenvalue is well separated from the others. Our main result is that both the top eigenvalue and eigenvector of a correlation matrix have an extra stability effect, which has previously been observed empirically but, to our knowledge, never studied theoretically. Because of this, one has to use different methods for determining and studying the stationarity of correlations than for covariances. The results are also of practical interest, as they give intuition on how to check for non-stationarities and how to adapt the estimator to them. They can also be used as a tool to quantify more precisely the correlation risk versus the volatility risk of a portfolio, or for fine-tuning estimators.
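One source of the extra stability can be illustrated with a toy example (ours, not the talk's theoretical argument): the correlation matrix, and hence its top eigenpair, is invariant to rescaling individual asset volatilities, while the covariance eigenpair moves with every volatility shock:

```python
import numpy as np

def corr_from_cov(cov):
    """Rescale a covariance matrix to a correlation matrix."""
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)

# A small covariance matrix with one dominant common factor (made-up numbers)
cov = np.array([[4.0, 1.8, 1.6],
                [1.8, 2.0, 1.1],
                [1.6, 1.1, 1.5]])

# Double the first asset's volatility (variance x4, covariances x2)
scale = np.diag([2.0, 1.0, 1.0])
cov_scaled = scale @ cov @ scale

# The correlation matrix is unchanged, so its top eigenpair is too,
# whereas the top covariance eigenvalue jumps with the volatility shock.
corr, corr_scaled = corr_from_cov(cov), corr_from_cov(cov_scaled)
print(np.allclose(corr, corr_scaled))                 # True
top_cov = np.linalg.eigvalsh(cov)[-1]
top_cov_scaled = np.linalg.eigvalsh(cov_scaled)[-1]
print(f"top covariance eigenvalue: {top_cov:.2f} -> {top_cov_scaled:.2f}")
```

Volatility fluctuations are a large part of the non-stationarity in equity data, so removing them by construction already stabilizes the correlation eigenpair; the talk's theoretical results go beyond this mechanical effect.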
Please see the following link for information on the BSTARS Conference 2017. The Seminar will reconvene as usual on April 4, 2017.
I analyze the large-deviation probability of factor models generated from components with regularly varying tails, a large subclass of heavy-tailed distributions. An efficient sampling method for tail-probability estimation in this class is introduced and shown to exponentially outperform the classical Monte Carlo estimator, in terms of the coverage probability and/or the length of the confidence interval. The theoretical results are applied to financial portfolios, verifying that the deviation probability of returns on portfolios of many securities is asymptotically robust to the distributions of asset-specific idiosyncratic risks. Read the paper this talk is based on.
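The flavor of such variance reduction can be sketched with the classical Asmussen–Kroese conditional Monte Carlo estimator for P(S_n > t) with heavy-tailed iid summands; this is a standard technique for this setting, not necessarily the talk's estimator, and the Pareto tail and parameters below are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def pareto_tail(x, alpha=2.0):
    """Survival function of a Pareto(alpha) variable supported on [1, inf)."""
    return np.where(x <= 1.0, 1.0, x ** -alpha)

def ak_estimate(n, t, alpha=2.0, n_sims=100_000):
    """Asmussen-Kroese conditional MC estimate of P(X_1 + ... + X_n > t).

    Uses P(S_n > t) = n * E[ F_bar(max(M_{n-1}, t - S_{n-1})) ], where
    M_{n-1} and S_{n-1} are the max and sum of the first n-1 summands,
    conditioning on the last summand being the largest.
    """
    x = rng.pareto(alpha, size=(n_sims, n - 1)) + 1.0  # Pareto on [1, inf)
    m = x.max(axis=1)   # maximum of the first n-1 summands
    s = x.sum(axis=1)   # partial sum of the first n-1 summands
    z = n * pareto_tail(np.maximum(m, t - s), alpha)
    return z.mean()

est = ak_estimate(n=5, t=100.0)
print(f"AK estimate: {est:.2e}")  # near the one-big-jump asymptote n * t**-alpha
```

Unlike the crude estimator, whose relative error blows up as t grows (hits become vanishingly rare), this conditional estimator has bounded relative error for regularly varying tails, which is the kind of exponential outperformance the abstract refers to.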
Drawdown, and in particular maximum drawdown, is a widely used indicator of risk in the fund management industry. It is a vital metric for a levered investor, who can get caught in a liquidity trap and be forced to sell valuable positions if unable to secure funding after an abrupt market decline. Moreover, it is a pathwise risk measure, in contrast to end-horizon risk diagnostics such as volatility, Value-at-Risk, and Expected Shortfall, which are less informative conditional on a large drawdown. In this talk, I will present ongoing work aimed at computations for Conditional Expected Drawdown (CED), a recently developed extreme risk measure on maximum drawdown; look at risk-based asset allocation under CED and how it compares with other risk measures; CED risk attribution; and more.
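For reference, maximum drawdown is the largest peak-to-trough decline of the cumulative value path; a standard computation (with made-up numbers) is:

```python
import numpy as np

def max_drawdown(prices):
    """Largest peak-to-trough decline, as a fraction of the running peak."""
    prices = np.asarray(prices, dtype=float)
    running_peak = np.maximum.accumulate(prices)  # pathwise: depends on order
    drawdowns = 1.0 - prices / running_peak
    return drawdowns.max()

# Hypothetical value path: peaks at 120, troughs at 90 -> 25% drawdown
path = [100, 110, 120, 105, 90, 100, 115]
print(f"maximum drawdown: {max_drawdown(path):.1%}")  # 25.0%
```

Note that shuffling the path changes the result, which is exactly the pathwise character that end-horizon measures like volatility miss; CED is then (roughly) the expected maximum drawdown conditional on it exceeding a high quantile.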
We consider a panel of 88 "systematic factors": simple, quantitative procedures that assign scores to a universe of assets using publicly available data. For each factor, we construct idealized daily factor portfolios (long/short, market-neutral) and daily return series for the 16-year period between January 2001 and December 2016. Each of the factor return series has positive sample mean, and for all but twelve, the one-sided t-test rejects the zero-mean hypothesis at the 95% confidence level. Moreover, for the full sample, the factors are nearly uncorrelated, and when we partition the factors into nine clusters by asset class and market, equally weighting within each cluster, the p-value for each cluster is less than 0.0001. The cluster returns are again nearly uncorrelated in the full sample, and the return distribution for each cluster exhibits positive skew. We also verify that the cluster returns are essentially uncorrelated with various market benchmarks, and perhaps more surprisingly, with a panel of "style factor" returns provided by a third-party vendor.
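The one-sided test applied to each factor series is standard; as a sketch (on a synthetic return series of ours, not the study's data), reject the zero-mean hypothesis in favor of a positive mean when the t-statistic exceeds the 95% critical value:

```python
import numpy as np

def one_sided_t_stat(returns):
    """t-statistic for H0: mean = 0 against H1: mean > 0."""
    r = np.asarray(returns, dtype=float)
    return r.mean() / (r.std(ddof=1) / np.sqrt(len(r)))

# Synthetic return series: alternating +2% / -1% days, mean +0.5% per day
returns = np.tile([0.02, -0.01], 100)

t = one_sided_t_stat(returns)
# With 199 degrees of freedom, the one-sided 95% critical value is ~1.65,
# so the zero-mean hypothesis is rejected for this series.
print(f"t = {t:.2f}")  # ~4.70
```

With 88 factors tested, individual 95%-level rejections invite multiple-testing concerns, which is one reason the clustered p-values below 0.0001 are the more telling statistic.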