
Notes

Technical Writing

Short-form technical notes on topics from my research.

Why Rough Volatility Matters

Something I learned building my deep hedging project

When I first started my deep hedging project, I was generating volatility paths using standard geometric Brownian motion (GBM) - the textbook approach. The paths looked reasonable, but something felt off when I compared them to real market data. Real volatility is jagged and spiky in ways that smooth GBM paths just aren't.

That led me down the rabbit hole of rough volatility. The key insight is that empirical volatility has a Hurst parameter around H ≈ 0.1, way below the 0.5 you get from standard Brownian motion. Fractional Brownian motion captures this roughness, but there's a catch: naïve fBm simulation is O(N²), which was way too slow for training an RL agent that needed millions of paths.

I spent a good chunk of time implementing the Davies-Harte algorithm, which uses circulant embedding and FFT to bring it down to O(N log N). Getting the numerics right was honestly harder than I expected - there are subtle edge cases with negative eigenvalues in the embedding that took me a while to debug.
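The core of the trick fits in surprisingly little numpy. Here's a stripped-down sketch of the circulant embedding idea - a simplified variant rather than the full Davies-Harte treatment, with the round-off-induced negative eigenvalues handled by clipping:

```python
import numpy as np

def fgn_davies_harte(n, hurst, rng):
    """Sample n steps of fractional Gaussian noise via circulant embedding.

    O(n log n) via FFT, versus O(n^2) for the naive Cholesky approach."""
    # Autocovariance of fGn: gamma(k) = 0.5 * (|k+1|^2H - 2|k|^2H + |k-1|^2H)
    k = np.arange(n + 1)
    gamma = 0.5 * (np.abs(k + 1) ** (2 * hurst)
                   - 2 * np.abs(k) ** (2 * hurst)
                   + np.abs(k - 1) ** (2 * hurst))
    # First row of the circulant matrix that embeds the covariance (length 2n)
    row = np.concatenate([gamma, gamma[-2:0:-1]])
    # Eigenvalues of a circulant matrix are the FFT of its first row
    eig = np.fft.fft(row).real
    # Round-off can produce tiny negative eigenvalues in the embedding;
    # clipping them to zero is the pragmatic fix
    eig = np.clip(eig, 0.0, None)
    m = 2 * n
    z = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    y = np.fft.fft(np.sqrt(eig / m) * z)
    # The real part has exactly the target fGn covariance
    return y[:n].real

rng = np.random.default_rng(42)
increments = fgn_davies_harte(256, hurst=0.1, rng=rng)
rough_path = np.cumsum(increments)  # a rough (fractional Brownian motion) path
```

Cumulative-summing the noise gives the fBm path; with H = 0.1 the result is visibly spikier than a standard Brownian path at the same resolution.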

The payoff was worth it though. Once I switched to rough paths, the RL agent was forced to learn path-dependent hedging strategies instead of relying on Markovian shortcuts. The final model achieved a 69% reduction in CVaR versus classical delta hedging, which I think speaks to how much the path structure actually matters for risk management.

You Can't Have It All: Kelly Betting

The question that became my honors thesis

This started as a simple question in a probability class: the Kelly criterion tells you to bet f* = edge/odds to maximize long-run growth, but Kelly bettors can experience terrifying drawdowns along the way. Like, theoretically unbounded drawdowns. Can you fix that?
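To make "edge/odds" concrete: for a bet that wins with probability p and pays net odds b-to-1, the Kelly fraction is f* = (bp - q)/b with q = 1 - p, and it's the maximizer of the expected log-growth per bet. A tiny sketch:

```python
import math

def kelly_fraction(p, b):
    """Kelly bet fraction: f* = (b*p - q) / b, with q = 1 - p.
    The numerator b*p - q is the 'edge', b is the 'odds'."""
    q = 1.0 - p
    return (b * p - q) / b

def growth_rate(f, p, b):
    """Expected log-growth per bet when staking fraction f of bankroll."""
    return p * math.log(1 + b * f) + (1 - p) * math.log(1 - f)

# A coin that wins 60% of the time at even odds (b = 1):
f_star = kelly_fraction(0.6, 1.0)  # edge = 0.6 - 0.4 = 0.2, so f* = 0.2
```

Checking growth_rate at fractions on either side of f* confirms it sits at the peak of the growth curve - which is exactly why any constraint that forces you off f* costs growth.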

I assumed the answer was yes - surely there's some clever modification that bounds your worst-case loss while keeping growth optimal. That assumption turned into my thesis when I realized the answer is actually no, at least in the settings I care about (heavy-tailed distributions, which describe a lot of real-world betting and investing scenarios).

The intuition is surprisingly clean: bounding maximum drawdown means restricting how much you can bet, and in infinite-variance environments, that restriction creates a permanent drag on growth that never goes away. It's not an engineering problem you can optimize around - it's a fundamental trade-off baked into the math.

The practical upshot of the thesis is a Volatility-Augmented HMM I designed for regime detection. Instead of hard drawdown constraints (which I proved can't work optimally), you detect when you're in a high-volatility regime and scale your bets down adaptively. It's not a perfect solution, but it detects regime shifts about 90% faster than standard approaches, which is good enough to be useful.
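To show just the scaling mechanism (not the thesis model itself - the HMM is what makes detection fast, and a rolling window is the crudest possible stand-in for it), the adaptive idea looks something like:

```python
import numpy as np

def regime_scaled_fraction(returns, base_fraction, window=50):
    """Toy stand-in for the adaptive-scaling idea: shrink the bet when
    recent volatility runs hot relative to the long-run level.
    (The actual thesis uses a Volatility-Augmented HMM for the regime
    detection step; the rolling window here is purely illustrative.)"""
    recent_vol = np.std(returns[-window:])
    longrun_vol = np.std(returns)
    # Only ever scale DOWN: calm regimes keep the full fraction
    scale = min(1.0, longrun_vol / max(recent_vol, 1e-12))
    return base_fraction * scale
```

The point of the HMM over a window like this is latency: a rolling estimate needs most of the window to notice a regime change, while the HMM's likelihood ratio reacts to the first few anomalous observations.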

Hierarchical Bayes for Covariance Estimation

Why I stopped trusting sample covariance matrices

I had one of those moments in my portfolio optimization project where everything clicked and also fell apart simultaneously. I was running mean-variance optimization on a basket of assets and getting portfolios that looked insane - wildly concentrated, completely unstable from one day to the next. Turns out the sample covariance matrix was the problem.

The issue is well-known but I hadn't internalized it until I saw it firsthand: with N assets you're estimating N(N+1)/2 covariance parameters, so when you only have T time periods of data and T isn't much larger than N, your covariance estimates are basically noise. The optimizer then happily exploits that noise and gives you garbage portfolios.
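A quick way to see how bad it gets: simulate assets whose true covariance is exactly the identity and look at the eigenvalues of the sample covariance when T barely exceeds N. Every deviation from 1 below is pure estimation noise - and it's exactly the structure a mean-variance optimizer will latch onto:

```python
import numpy as np

rng = np.random.default_rng(0)
n_assets, n_periods = 100, 120  # T barely larger than N

# True covariance is the identity: independent assets, unit variance
returns = rng.standard_normal((n_periods, n_assets))
sample_cov = np.cov(returns, rowvar=False)
eigs = np.linalg.eigvalsh(sample_cov)

# Every true eigenvalue is 1.0, yet the sample ones are wildly spread
print(round(eigs.min(), 3), round(eigs.max(), 3))
```

The smallest sample eigenvalues land near zero and the largest well above 3, purely from noise - so the optimizer sees phantom near-riskless combinations and piles into them.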

My solution was to go Bayesian. I set up a hierarchical model with LKJ priors on the correlation structure (which gently pulls correlations toward zero unless the data strongly supports otherwise) and half-Cauchy priors on volatilities (which allow for fat-tailed uncertainty). The inference runs through NUTS in PyMC, which I found surprisingly pleasant to work with once I got past the initial learning curve.

What I like most about this approach is that you get uncertainty quantification for free. Instead of a single covariance estimate, you get a full posterior distribution, so you can actually see how confident (or not) you should be in your portfolio weights. The resulting portfolios are noticeably more stable and have better tail-risk properties than anything I got from sample covariance.