All seminars are held in 1011 Evans Hall at UC Berkeley, unless otherwise notified.

Upcoming seminar

Tuesday, October 23, 2018 11:00 AM to 12:30 PM

Xiang Zhang, SWUFE: Proliferation of Anomalies and Zoo of Factors – What does the Hansen–Jagannathan Distance Tell Us?

Recent research finds that prominent asset pricing models have mixed success in evaluating the cross-section of anomalies, which highlights the proliferation of anomalies and the zoo of factors. In this paper, I investigate the relative pricing performance of these models in explaining anomalies by comparing their misspecification errors, measured by the Hansen–Jagannathan (HJ) distance. Using multiple HJ-distance comparison inference, I find that a traded-factor model dominates the others for a given anomaly. However, in contrast to recent work by Barillas and Shanken (2017) and Barillas, Kan, Robotti and Shanken (2018), I find that the HJ distance is a general statistical measure for comparing models and that some model-derived non-traded factors even outperform traded factors. Second, there is large variation in the shape and curvature of the anomalies' confidence sets, which makes it difficult for any single SDF to satisfy all of them. My results imply that further work is required not only in pruning the number of priced factors but also in building models that explain the anomalies better.
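
For readers less familiar with the HJ distance, the sketch below shows how the sample version of the distance is typically computed for a candidate SDF series: pricing errors weighted by the inverse second-moment matrix of returns. The simulated data and variable names are purely illustrative and are not the speaker's.

```python
import numpy as np

def hj_distance(sdf, gross_returns):
    """Sample Hansen-Jagannathan distance for a candidate SDF series.

    sdf:           (T,) realizations of the candidate stochastic discount factor
    gross_returns: (T, N) gross returns on N test assets
    """
    T, N = gross_returns.shape
    pricing_errors = gross_returns.T @ sdf / T - np.ones(N)   # E[m R] - 1
    second_moment = gross_returns.T @ gross_returns / T       # E[R R']
    return np.sqrt(pricing_errors @ np.linalg.solve(second_moment, pricing_errors))

# Illustrative usage with simulated data (not the paper's data)
rng = np.random.default_rng(0)
R = 1.0 + 0.01 * rng.standard_normal((600, 10))               # 600 months, 10 test portfolios
m = 1.0 / (1.0 + 0.005 + 0.01 * rng.standard_normal(600))     # toy SDF
print(hj_distance(m, R))
```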

All seminars


Tuesday, August 28, 2018 11:00 AM to 12:30 PM
Montserrat Guillen, University of Barcelona: Is motor insurance ratemaking going to change with telematics and semi-autonomous vehicles?

Many automobile insurance companies offer the possibility to monitor driving habits and distance driven by means of telematics devices installed in the vehicles. This provides a novel source of data that can be analysed to calculate personalised tariffs. For instance, drivers who accumulate a lot of miles should be charged more for their insurance coverage than those who make little use of their car. However, it can also be argued that drivers with more miles have better driving skills than those who hardly use their vehicle, meaning that the price per mile should decrease with distance driven. The statistical analysis of a real data set by means of machine learning techniques shows the existence of a gaining-experience effect for large values of distance travelled, so that longer driving should result in a higher premium, but there should be a discount for drivers who accumulate longer distances over time, due to the increased proportion of zero claims. We confirm that speed-limit violations and driving in urban areas increase the expected number of accident claims. We discuss how telematics information can be used to design better insurance and to improve traffic safety. Predictive models provide benchmarks of the impact of semi-autonomous vehicles on insurance rates.

This talk will cover the award-winning paper on semi-autonomous vehicle insurance presented at the International Congress of Actuaries in Berlin in June 2018, which is under revision at Accident Analysis and Prevention. It will also include the contents of a paper entitled “The use of telematics devices to improve automobile insurance rates”, accepted at Risk Analysis.
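
As a rough illustration of the per-mile pricing question discussed above, a claim-frequency model can treat distance driven as an exposure offset in a Poisson regression, so that coefficients describe claim frequency per kilometre. The sketch below uses simulated data and made-up variable names; it is not the speaker's model.

```python
import numpy as np
import statsmodels.api as sm

# Simulated policy data (illustrative only; variable names are made up)
rng = np.random.default_rng(1)
n = 5000
km = rng.gamma(shape=2.0, scale=5000.0, size=n)      # annual distance driven
speeding = rng.poisson(2.0, size=n)                  # speed-limit violation events
urban = rng.uniform(0.0, 1.0, size=n)                # share of driving in urban areas

# Claim counts generated with a per-km rate that declines with distance
# (a "gaining experience" effect, as discussed in the abstract)
rate_per_km = 1e-5 * np.exp(0.08 * speeding + 0.5 * urban - 0.3 * np.log(km / 1000.0))
claims = rng.poisson(rate_per_km * km)

# Poisson regression with log(distance) as an exposure offset
X = sm.add_constant(np.column_stack([speeding, urban, np.log(km / 1000.0)]))
model = sm.GLM(claims, X, family=sm.families.Poisson(), offset=np.log(km))
print(model.fit().summary())
```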

Read the paper this talk is based on: 201810 201811 Download the slides from this presentation: Guillen_Insurance_Analytics_Webversion


Tuesday, September 4, 2018 11:00 AM to 12:30 PM
Saad Mouti, UC Berkeley: On Optimal Options Book Execution Strategies with Market Impact

We consider the optimal execution of a book of options when market impact is a driver of the option price. We aim to minimize a mean-variance risk criterion for a given market impact function. First, we develop a framework to justify the choice of our market impact function. Our model is inspired by Leland’s option replication with transaction costs, where the market impact is directly part of the implied volatility function. The option price is then expressed through a Black–Scholes-like PDE with a modified implied volatility directly dependent on the market impact. We set up a stochastic control framework and solve a Hamilton–Jacobi–Bellman equation using finite-difference methods. The expected-cost problem suggests that the optimal execution strategy is characterized by a convex increasing trading speed, in contrast to the equity case where the optimal execution strategy results in a roughly constant trading speed. However, in such a mean-valuation framework, the underlying spot price does not seem to affect the agent’s decision. By taking the agent’s risk aversion into account through a mean-variance approach, the strategy becomes more sensitive to the underlying price evolution, urging the agent to trade faster at the beginning of the strategy.
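
For readers unfamiliar with the terminology, a generic form of a mean-variance execution criterion is sketched below; the notation is illustrative and not necessarily the speaker's formulation.

```latex
% Generic mean-variance execution criterion (illustrative notation):
% choose a trading speed (u_t) over the horizon [0, T] to
\min_{(u_t)_{0 \le t \le T}} \; \mathbb{E}\big[ C(u) \big] \;+\; \lambda \, \mathrm{Var}\big[ C(u) \big]
% where C(u) is the cost of unwinding the option book and \lambda \ge 0 is the
% risk-aversion parameter; the associated value function solves an HJB equation
% that can be discretized with finite differences.
```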

Download the slides from this presentation: Presentation_Saad


Tuesday, September 11, 2018 11:00 AM to 12:30 PM
Tamas Batyi, UC Berkeley: Capacity constraints in learning, and asset prices before earnings announcements
This paper proposes an asset pricing model with endogenous allocation of constrained learning capacity that provides an explanation for abnormal returns before the scheduled release of information about firms, such as quarterly earnings announcements. In equilibrium, investors endogenously focus their learning capacity and acquire information about stocks with upcoming announcements, resulting in excess price movements during this period. I show cross-sectional heterogeneity in stock returns and in institutional investors' information demand before quarterly earnings announcements that is consistent with the model. The results suggest that limited information acquisition capacity, and investors' optimal allocation response to it, can play a significant role in asset price movements before firms' scheduled announcements.


Tuesday, September 18, 2018 11:00 AM to 12:30 PM
Haosui (Kevin) Duanmu, UC Berkeley: Nonstandard Analysis and its Application to Markov Processes

Nonstandard analysis, a powerful machinery derived from mathematical logic, has had many applications in probability theory as well as stochastic processes. Nonstandard analysis allows the construction of a single object, a hyperfinite probability space, which satisfies all the first-order logical properties of a finite probability space but can simultaneously be viewed as a measure-theoretic probability space via the Loeb construction. As a consequence, the hyperfinite/measure duality has proven particularly useful in porting discrete results into their continuous settings.

In this talk, for every general-state-space continuous-time Markov process satisfying appropriate conditions, we construct a hyperfinite Markov process to represent it. Hyperfinite Markov processes have all the first-order logical properties of a finite Markov process. We establish ergodicity of a large class of general-state-space continuous-time Markov processes by studying their hyperfinite counterparts. We also establish the asymptotic equivalence between mixing times, hitting times and average mixing times for discrete-time general-state-space Markov processes satisfying moderate conditions. Finally, we show that our result is applicable to a large class of Gibbs samplers and a large class of Metropolis–Hastings algorithms.

Download the slides from this presentation: berkeleytalk


Tuesday, September 25, 2018 11:00 AM to 12:30 PM
Ben Gum, AXA Rosenberg: A Deep Learning Investigation of One-Month Momentum

The one-month return reversal in equity prices was first documented by Jegadeesh (1990), who found a highly significant negative serial correlation in the monthly return series of stocks. This is in contrast to the positive serial correlation of annual stock returns. Explanations for this effect differ, but the general consensus has been that the trailing one-month return includes a component of overreaction by investors. Since 1990, the one-month return reversal effect has decayed substantially, which has led others to refine it. Asness, Frazzini, Gormsen, and Pedersen (2017) refine this idea by adjusting MAX5 (the average of the five highest daily returns over the trailing month) for trailing volatility. They define a measure SMAX (scaled MAX5), which is MAX5 divided by the trailing-month daily return volatility. SMAX is designed to capture lottery demand in excess of volatility. They show that SMAX has an even stronger one-month return reversal than the trailing-month return.
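
As a concrete illustration of the MAX5 and SMAX definitions above, here is a minimal pandas sketch, assuming a 21-trading-day window as the trailing month; the column names and simulated returns are illustrative only.

```python
import numpy as np
import pandas as pd

def smax(daily_returns: pd.DataFrame, window: int = 21) -> pd.DataFrame:
    """SMAX = MAX5 / trailing daily-return volatility, per stock, per rolling window.

    daily_returns: DataFrame of daily returns, one column per stock.
    MAX5 is the average of the five highest daily returns in the window.
    """
    max5 = daily_returns.rolling(window).apply(
        lambda x: np.sort(x)[-5:].mean(), raw=True)
    vol = daily_returns.rolling(window).std()
    return max5 / vol

# Illustrative usage with simulated returns (not the study's data)
rng = np.random.default_rng(2)
rets = pd.DataFrame(0.02 * rng.standard_normal((252, 3)),
                    columns=["stock_A", "stock_B", "stock_C"])
print(smax(rets).tail())
```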

In this talk, I first replicate the results of Jegadeesh and of Asness et al. as benchmark models. I confirm that SMAX outperforms simple return reversal over the test period 1993-2017. However, the effectiveness of SMAX declines substantially over the test period. Using an enhanced combination of return statistics, I improve upon SMAX. I further improve upon SMAX by applying neural networks to trailing daily active returns. Note that all of these signals decay substantially in effectiveness over the common test period 1998-2017.

Download the slides from this presentation: CDAR_slides


Tuesday, October 2, 2018 11:00 AM to 12:30 PM
Dangxing Chen, UC Berkeley: Predicting Portfolio Return Volatility at Medium Horizons

Commercially available factor models provide good predictions of short-horizon (e.g. one day or one week) portfolio volatility, based on estimated portfolio factor loadings and responsive estimates of factor volatility. These predictions are of significant value to certain short-term investors, such as hedge funds. However, they provide limited guidance to long-term investors, such as Defined Benefit pension plans, individual owners of Defined Contribution pension plans, and insurance companies. Because return volatility is variable and mean-reverting, the square root rule for extrapolating short-term volatility predictions to medium-horizon (one year to five years) risk predictions systematically overstates (understates) medium-horizon risk when short-term volatility is high (low). In this paper, we propose a computationally feasible method for extrapolating to medium-horizon risk predictions in one-factor models that substantially outperforms the square root rule.
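
To make the square-root-rule argument concrete, the toy sketch below compares the rule against a simple geometric mean-reverting variance forecast; the mean-reversion specification and parameter values are illustrative assumptions, not the paper's model.

```python
import numpy as np

def sqrt_rule(sigma_daily, horizon_days):
    """Extrapolate a daily volatility estimate to a longer horizon
    assuming i.i.d. returns: sigma_T = sigma_1d * sqrt(T)."""
    return sigma_daily * np.sqrt(horizon_days)

def mean_reverting_forecast(sigma_daily, horizon_days,
                            long_run_sigma=0.01, kappa=0.02):
    """Toy horizon volatility when daily variance decays geometrically
    toward its long-run level at rate kappa (illustrative only)."""
    days = np.arange(horizon_days)
    var_path = long_run_sigma**2 + (sigma_daily**2 - long_run_sigma**2) * (1 - kappa)**days
    return np.sqrt(var_path.sum())

# When current volatility is high (2% daily vs. a 1% long-run level), the
# square-root rule overstates one-year risk relative to the mean-reverting forecast.
print(sqrt_rule(0.02, 252), mean_reverting_forecast(0.02, 252))
```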



Tuesday, October 9, 2018 11:00 AM to 12:30 PM
Jacob Steinhardt, Stanford: Robust Learning: Information Theory and Algorithms
This talk will provide an overview of recent results in high-dimensional robust estimation. The key question is the following: given a dataset, some fraction of which consists of arbitrary outliers, what can be learned about the non-outlying points? This is a classical question going back at least to Tukey (1960). However, this question has recently received renewed interest for a combination of reasons. First, many of the older results do not give meaningful error bounds in high dimensions (for instance, the error often includes an implicit sqrt(d)-factor in d dimensions). Second, recent connections have been established between robust estimation and other problems such as clustering and learning of stochastic block models. Currently, the best known results for clustering mixtures of Gaussians are via these robust estimation techniques. Finally, high-dimensional biological datasets with structured outliers such as batch effects, together with security concerns for machine learning systems, motivate the study of robustness to worst-case outliers from an applied direction.
The talk will cover both information-theoretic and algorithmic techniques in robust estimation, aiming to give an accessible introduction. We will start by reviewing the 1-dimensional case, and show that many natural estimators break down in higher dimensions. Then we will give a simple argument that robust estimation is information-theoretically possible. Finally, we will show that under stronger assumptions we can perform robust estimation efficiently, via a "dual coupling" inequality that is reminiscent of matrix concentration inequalities.
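
A small numerical illustration of the sqrt(d) issue mentioned above, using the coordinate-wise median as the robust estimator; the setup is a toy demonstration under assumed parameters, not the speaker's construction.

```python
import numpy as np

def median_error_with_outliers(d, n=2000, eps=0.1, shift=5.0, seed=0):
    """L2 error of the coordinate-wise median when an eps-fraction of points
    is shifted by `shift` along every coordinate. The true mean is 0."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, d))
    n_out = int(eps * n)
    X[:n_out] += shift          # outliers all pushed in the same direction
    return np.linalg.norm(np.median(X, axis=0))

# The error grows roughly like sqrt(d), even though the outlier fraction is fixed.
for d in (1, 10, 100, 1000):
    print(d, median_error_with_outliers(d))
```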


Tuesday, October 16, 2018 11:00 AM to 12:30 PM
Tingyue Gan, UC Berkeley: Asymptotic Spectral Analysis of Markov Chains with Rare Transitions: A Graph-Algorithmic Approach

Parameter-dependent Markov chains with exponentially small transition rates arise in modeling complex systems in physics, chemistry, and biology. Such processes often manifest metastability, and the spectral properties of the generators largely govern their long-term dynamics. In this work, we propose a constructive graph-algorithmic approach to computing the asymptotic estimates of eigenvalues and eigenvectors of the generator. In particular, we introduce the concepts of the hierarchy of Typical Transition Graphs and the associated sequence of Characteristic Timescales. Typical Transition Graphs can be viewed as a unification of Wentzell’s hierarchy of optimal W-graphs and Freidlin’s hierarchy of Markov chains, and they are capable of describing typical escapes from metastable classes as well as cyclic behaviors within metastable classes, for both reversible and irreversible processes. We apply the proposed approach to conduct zero-temperature asymptotic analysis of the stochastic network representing the energy landscape of the Lennard-Jones cluster of 75 atoms.

Download the slides from this presentation: 20181016_riskseminar


Tuesday, October 23, 2018 11:00 AM to 12:30 PM
Xiang Zhang, SWUFE: Proliferation of Anomalies and Zoo of Factors – What does the Hansen–Jagannathan Distance Tell Us?

Recent research finds that prominent asset pricing models have mixed success in evaluating the cross-section of anomalies, which highlights the proliferation of anomalies and the zoo of factors. In this paper, I investigate the relative pricing performance of these models in explaining anomalies by comparing their misspecification errors, measured by the Hansen–Jagannathan (HJ) distance. Using multiple HJ-distance comparison inference, I find that a traded-factor model dominates the others for a given anomaly. However, in contrast to recent work by Barillas and Shanken (2017) and Barillas, Kan, Robotti and Shanken (2018), I find that the HJ distance is a general statistical measure for comparing models and that some model-derived non-traded factors even outperform traded factors. Second, there is large variation in the shape and curvature of the anomalies' confidence sets, which makes it difficult for any single SDF to satisfy all of them. My results imply that further work is required not only in pruning the number of priced factors but also in building models that explain the anomalies better.



Tuesday, November 13, 2018 11:00 AM to 12:30 PM


Tuesday, November 27, 2018 11:00 AM to 12:30 PM