TL;DR: You don’t need one perfect crypto strategy. You need a portfolio of orthogonal sleeves - trend, crash, chop - each with its own risk budget and cost model. A regime classifier identifies the current market state, and a convex allocator routes capital accordingly. The result is reduced variance, tighter drawdowns, and higher compounded returns than any single sleeve alone.
At QuantJourney, Alex and I run three sleeves that delivered >50% over the last three months (gross, period-specific). Here we explain how you can build something similar, and we share some code.
Executive summary
Our portfolio code implements a multi-engine crypto trading system that:
Connects to Binance WebSocket streams to fetch real-time 1-minute kline (candlestick) data for specified symbols (e.g., BTCUSDT, ETHUSDT).
Maintains state for each symbol, including price histories and technical indicators (EMA, MACD, ATR, ADX).
Runs three orthogonal trading strategies (“sleeves”):
Long Trend: Captures bullish trends using Donchian channels and ADX.
Short Trend: Captures bearish trends similarly.
Mean-Reversion Scalper: Trades based on MACD crossovers and EMA conditions.
Classifies market regimes (“Trend,” “Crash,” “Chop”) to guide allocation.
Uses a convex allocator to optimize weights across the three strategies based on the detected regime, historical returns, and risk constraints.
Applies vol targeting, cost modeling, and turnover penalties as first-class citizens.
The result: lower path risk, fewer equity air-pockets, higher geometric return.
Our system is designed for robustness with features like endpoint rotation, reconnection logic, and logging for debugging.
Here's a mini-results table for the three sleeves over the last 3 months on 1-minute bars (backtested on BTC/ETH, fees included, no slippage hocus-pocus). The portfolio Sharpe beat any solo sleeve by 40%, thanks to inter-sleeve correlations around -0.1 - gold for seeing orthogonality in action (read on):
1) Why long-only momentum stalls
We started with a basic long-only momentum sleeve that crushed in the 2024 bull but bled out in the summer chop.
Breakout systems monetize convexity during expansions. Outside those windows they sit flat or decay. Crypto intensifies this: expansions are episodic, crashes are frequent, and chop is common. We can treat that as a state space, not noise.
States:
Trend: directional persistence, mid vol, low reversal rate.
Crash: negative drift, expanding vol, widening spreads.
Chop: high vol, weak trend, high breakout failure rate.
What most newcomers to crypto miss:
Chop is not noise. It is a harvestable mean-reversion regime.
Crashes require borrow and funding control or they hand back your bull-run gains.
Long-only momentum is a sleeve, not a portfolio (or at least it shouldn’t be!).
2) Portfolio architecture
Three engines with distinct drivers and holding periods:
Long Trend Sleeve - 1h–1d Donchian/MA with ADX filter. Caps on leverage and single-name exposure.
Short Trend Sleeve - Downside breakout plus breadth breakdown and vol expansion. Exhaustion exits. Borrow/funding-aware. No illiquid alts.
Mean-Reversion (MR) Scalper Sleeve - 1–5m EMA/VWAP reversion. Microstructure filters. Edge = spread capture + selection, not prediction.
Definition of Sleeve = independent module
A sleeve is an independent strategy module. Each has:
Entry/exit logic on a defined timeframe
Its own risk budget (target volatility or max notional)
A cost model (fees, slippage, borrow/funding)
Limits (trade frequency, concentration, kill-switch)
Independent P&L accounting for diagnostics
Correlation target
The goal is to build a portfolio from multiple independent strategies, or “sleeves,” that don’t all move in the same direction at the same time. This creates a smoother, more resilient return profile. We measure how strategies move together using correlation (ρ). Our target is to keep the monthly return correlation between any two sleeves in the -0.2 to +0.2 band. This is the sweet spot. A correlation near zero means the performance of one strategy gives us almost no information about the performance of another. The performance independence between sleeves is the key driver of risk reduction.
The Mathematics of Radical Risk Reduction
When you combine two strategies with equal volatility σ and equal weights (w₁ = w₂ = 0.5) that have a low correlation ρ, you dramatically reduce the portfolio’s overall risk (variance). The fraction of variance you eliminate relative to holding a single strategy, ΔVar, is given by this simple formula:

ΔVar = (1 − ρ) / 2

Let’s use a real-world example. If we combine two sleeves with a correlation ρ of just −0.1, look at the impact:

ΔVar = (1 − (−0.1)) / 2 = 0.55
This variance reduction is material. Even modestly negative correlations compound into large portfolio benefits. By combining two largely unrelated strategies, we erase 55% of the portfolio’s variance. This “saved” risk is the raw material we use to build a more efficient portfolio. We then recycle this capacity, using it to increase our exposure and target higher compounded gains, all while staying within our original risk budget.
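To see where the formula comes from, here is a standalone Python check (synthetic volatility; as stated above, the derivation assumes equal weights and equal vols):

import numpy as np

# For w1 = w2 = 0.5 and equal vol sigma:
#   Var_p = 0.25*sigma^2 + 0.25*sigma^2 + 2*0.25*rho*sigma^2 = sigma^2 * (1 + rho) / 2
# Variance saved vs. holding one sleeve: 1 - (1 + rho)/2 = (1 - rho)/2
sigma = 0.20
for rho in (0.5, 0.0, -0.1, -0.2):
    var_port = sigma**2 * (1 + rho) / 2
    saved = 1.0 - var_port / sigma**2
    print(f"rho={rho:+.1f}  portfolio var={var_port:.4f}  variance saved={saved:.0%}")
# rho=-0.1 prints 'variance saved=55%'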
Orthogonality in Practice
This isn’t just theory. Consider a portfolio designed to trade crypto markets through various regimes:
A long-trend sleeve that thrives in bull markets (e.g., +0.6 correlation to BTC).
A crisis shorting sleeve that profits from crashes (e.g., -0.8 correlation to BTC).
A scalping sleeve that performs best in choppy, sideways markets (near-zero correlation).
Individually, each has its weaknesses. Combined, their low inter-correlation creates a system that can perform across different market conditions. This mirrors classic equity statistical arbitrage setups, where momentum longs are paired with value shorts to smooth returns and shave drawdowns by as much as 40%.
This discipline is enforced systematically. Our code utilizes an EWMA (Exponentially Weighted Moving Average) covariance matrix to constantly monitor the relationships between sleeves. By giving more weight to recent data, it allows us to react quickly if correlations begin to drift, automatically derating weights to cut off tail risks while preserving the nonlinear upside potential that each individual strategy offers. It’s the same daily discipline multi-asset desks use to balance their books, but executed with algorithmic precision.
class Sleeves:
    """
    Sleeves for a single symbol, updated incrementally.
    """

    @staticmethod
    def long_trend(s: SymbolState) -> int:
        """
        Long trend sleeve for a single symbol, updated incrementally.
        """
        # warm-up gate (Donchian + ADX stabilize)
        if len(s.deq_high) < DON_LEN or s.ind.adx14 is None or s.ind.ema50 is None:
            return s.pos_long
        close = s.deq_close[-1]
        don_h = max(s.deq_high)
        # entry
        if s.pos_long == 0 and close > don_h and s.ind.adx14 >= 20.0:
            return 1
        # exit
        if s.pos_long == 1 and (close < s.ind.ema50 or s.ind.adx14 < 20.0):
            return 0
        return s.pos_long

    @staticmethod
    def short_trend(s: SymbolState) -> int:
        """
        Short trend sleeve for a single symbol, updated incrementally.
        """
        # warm-up gate (Donchian + ADX stabilize)
        if len(s.deq_low) < DON_LEN or s.ind.adx14 is None or s.ind.ema50 is None:
            return s.pos_short
        close = s.deq_close[-1]
        don_l = min(s.deq_low)
        # entry
        if s.pos_short == 0 and close < don_l and s.ind.adx14 >= 20.0:
            return -1
        # exit
        if s.pos_short == -1 and (close > s.ind.ema50 or s.ind.adx14 < 20.0):
            return 0
        return s.pos_short

    @staticmethod
    def mr_scalper(s: SymbolState) -> int:
        """
        Mean-reversion scalper sleeve for a single symbol, updated incrementally.
        """
        # warm-up gate (MACD + EMA stabilize)
        if s.ind.macd_line is None or s.ind.macd_sig is None or s.ind.ema9 is None:
            return s.pos_mr
        close = s.deq_close[-1]
        m, sg = s.ind.macd_line, s.ind.macd_sig
        pm, ps = s.prev_macd, s.prev_sig
        nxt = s.pos_mr
        # entry/exit on a fresh MACD cross, filtered by which side of EMA9 price sits
        if pm is not None and ps is not None:
            if (close > s.ind.ema9) and (pm <= ps) and (m > sg):
                nxt = 1
            elif (close < s.ind.ema9) and (pm >= ps) and (m < sg):
                nxt = -1
        # update previous macd and signal
        s.prev_macd, s.prev_sig = m, sg
        return nxt
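For reference, here is a minimal sketch of the SymbolState container the sleeves above assume. The field names are reconstructed from how the code uses them; the real class in our system carries more state (funding, microstructure), and DON_LEN = 50 is assumed to match the 50-bar Donchian used in the batch code later in the post:

from collections import deque
from dataclasses import dataclass, field
from typing import Optional

DON_LEN = 50  # Donchian lookback used by the warm-up gates (assumed)

@dataclass
class Indicators:
    ema9: Optional[float] = None
    ema50: Optional[float] = None
    adx14: Optional[float] = None
    macd_line: Optional[float] = None
    macd_sig: Optional[float] = None

@dataclass
class SymbolState:
    # rolling price windows, appended on each closed 1m bar
    deq_high: deque = field(default_factory=lambda: deque(maxlen=DON_LEN))
    deq_low: deque = field(default_factory=lambda: deque(maxlen=DON_LEN))
    deq_close: deque = field(default_factory=lambda: deque(maxlen=DON_LEN))
    ind: Indicators = field(default_factory=Indicators)
    # current sleeve positions: long in {0, 1}, short in {-1, 0}, mr in {-1, 0, 1}
    pos_long: int = 0
    pos_short: int = 0
    pos_mr: int = 0
    # previous MACD values for cross detection
    prev_macd: Optional[float] = None
    prev_sig: Optional[float] = None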
3) Regime inference and risk routing
The system doesn’t try to predict the future. Its focus is on classifying the present state with enough fidelity to route risk intelligently.
First version: three regimes
At the start, we assumed the market was always in one of three coarse states:
Trend: sustained directional movement, either up or down, with conviction.
Breakouts hold, pullbacks are shallow, breadth is high, volatility is moderate.
Crash: sharp, high-volatility selling, a panic-driven form of trend.
Returns negative across horizons, vol and correlations spike, liquidity thins.
Chop: sideways, noisy, with frequent failed breakouts and reversals.
Trend signals fail, mean-reversion dominates, dispersion between assets rises.
This classification was driven by a dashboard of simple but diverse real-time features:
Trend strength: ADX, R-squared of price vs. time.
Volatility: EWMA σ, ATR percentile.
Breakout quality: hold rate after highs/lows.
Breadth: share of coins above MA, cross-sectional dispersion.
Microstructure: order-book imbalance, short-horizon reversion scores.
Evolution: structured state machine
The regime layer has since evolved. Instead of a hard label (“trend or not”), it now works as a state machine with:
Adaptive thresholds (updated via rolling quantiles).
Probabilistic scoring (softmax over logits rather than binary rules).
Hysteresis and dwell-time (reduces flapping between states).
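To make the state-machine idea concrete, here is a minimal sketch of probabilistic scoring plus hysteresis and dwell-time. The thresholds and the notion of per-regime logits are illustrative assumptions, not our production values:

import numpy as np

REGIMES = ("trend", "crash", "chop")

def regime_probs(logits: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    """Softmax over per-regime logits instead of hard if/else rules."""
    z = logits / temperature
    e = np.exp(z - z.max())
    return e / e.sum()

class RegimeStateMachine:
    """Hard label with hysteresis and a minimum dwell time to reduce flapping."""
    def __init__(self, enter_prob=0.55, exit_prob=0.45, min_dwell=30):
        self.enter_prob = enter_prob  # probability needed to switch in
        self.exit_prob = exit_prob    # probability below which we may switch out
        self.min_dwell = min_dwell    # bars to hold a label before switching
        self.state, self.dwell = "chop", 0

    def update(self, probs: np.ndarray) -> str:
        self.dwell += 1
        best = REGIMES[int(np.argmax(probs))]
        cur_p = probs[REGIMES.index(self.state)]
        if (best != self.state
                and probs.max() >= self.enter_prob
                and cur_p <= self.exit_prob
                and self.dwell >= self.min_dwell):
            self.state, self.dwell = best, 0
        return self.state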
Roadmap extensions
The current regime layer is rule-based; the extensions below (Level-2 order-book signals, ML classifiers) are planned but not yet integrated:
Level-2 order book signals: imbalance, queue position, liquidity shifts.
ML classifiers: gradient boosting or shallow NN over handcrafted features to capture nonlinear boundaries.
Variable positioning: sleeves don’t just flip on/off; position size scales with regime confidence.
More states: beyond {Trend, Crash, Chop}, possible extensions include “Low-Vol Drift,” “Euphoric Melt-Up,” “Capitulation Flush”.
Coin selection: sleeves need not run on every asset; a filter can activate only those with liquidity, dispersion, or favorable regime alignment.
Risk Routing (How we adapt our strategy)
Once the regime is inferred, the allocator moves the portfolio’s risk budget across sleeves. Baseline target maps (weights ordered [Trend sleeve, Crash sleeve, Chop sleeve]):
Trend → [0.60, 0.10, 0.30]
Crash → [0.10, 0.60, 0.30]
Chop → [0.15, 0.15, 0.70]
The individual sleeves continue to manage their own volatility targets, but the allocator decides how much capital and risk each sleeve is allowed to deploy. These weights are treated as soft targets and adjusted by:
Regime probabilities: stronger confidence → stronger tilt.
Correlation monitoring: if sleeves converge, allocations are derated.
Turnover penalties: the allocator avoids over-trading on marginal shifts.
This routing principle prevents any single sleeve from dominating in the wrong environment and ensures the portfolio can adapt as the tape transitions.
Here is the logic behind the specific weightings:
In a Trend Regime → Allocation: [Trend: 0.60, Crash: 0.10, Chop: 0.30]
Why 60% to the Trend sleeve? This is the sleeve’s ideal environment. The allocator “presses the bet,” giving the most risk to the strategy designed to capture strong, directional moves.
Why only 10% to the Crash sleeve? This sleeve (likely a shorting strategy) will probably lose a small amount of money in a strong uptrend. This allocation acts as a cheap hedge or “portfolio insurance” in case the trend suddenly and violently reverses.
Why 30% to the Chop sleeve? Even strong trends have periods of consolidation. A market-neutral scalping or mean-reversion sleeve can generate uncorrelated returns during these pauses, providing diversification and a secondary source of profit.
In a Crash Regime → Allocation: [Trend: 0.10, Crash: 0.60, Chop: 0.30]
Why 60% to the Crash sleeve? This is the moment the crisis-alpha or shorting strategy was built for. It gets the majority of the risk budget to capitalize on the panic.
Why only 10% to the Trend sleeve? The long-trend strategy is getting hurt. The allocator drastically cuts its risk to prevent catastrophic losses. It’s not set to zero because V-shaped bounces can be violent, and this small allocation can capture an immediate reversal.
Why 30% to the Chop sleeve? Crashes are defined by extreme volatility. A market-neutral scalping strategy can thrive on this volatility without taking on directional risk, making it an excellent diversifier when directional bets are failing.
In a Chop Regime → Allocation: [Trend: 0.15, Crash: 0.15, Chop: 0.70]
Why 70% to the Chop sleeve? In a sideways, directionless market, trend-following strategies get whipsawed and lose money. The scalping/mean-reversion sleeve is in its perfect environment, profiting from small oscillations. It gets the lion’s share of the risk.
Why 15% to Trend and 15% to Crash? This is the most subtle and important part. In a chop, you are waiting for the market to decide its next direction. These small allocations act as “scouts.” They are waiting to catch the very beginning of the next breakout, whether it’s up or down. As soon as a direction emerges, the corresponding sleeve will start to perform, the regime inference model will detect the change, and the allocator will quickly shift the weights to the new Trend or Crash regime. This prevents the system from being caught completely flat-footed when the market’s character changes.
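One way to express this routing in code is to blend the baseline maps by regime probability, so confidence drives the tilt. This is a simplified sketch of the idea, not our production allocator input:

import numpy as np

# columns: [long-trend sleeve, short/crash sleeve, mr/chop sleeve]
BASELINE = {
    "trend": np.array([0.60, 0.10, 0.30]),
    "crash": np.array([0.10, 0.60, 0.30]),
    "chop":  np.array([0.15, 0.15, 0.70]),
}

def soft_target(probs: dict) -> np.ndarray:
    """Blend baseline maps by regime probability: stronger confidence, stronger tilt."""
    w = sum(p * BASELINE[r] for r, p in probs.items())
    return w / w.sum()

# e.g. 70% trend / 20% chop / 10% crash confidence:
print(soft_target({"trend": 0.7, "chop": 0.2, "crash": 0.1}))
# -> roughly [0.46, 0.16, 0.38]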
4) Convex allocator that respects costs
In the previous step, our regime model gave us a set of ideal target weights, like [0.60, 0.10, 0.30] for a Trend regime. However, blindly forcing our portfolio to these new weights every single day would be inefficient and costly. Markets are noisy, and excessive trading (turnover) can bleed profits away through commissions and slippage.
This is where our convex allocator comes in. It’s a daily optimization process that finds the smartest possible allocation for today. Its job is to strike a perfect balance between three competing goals:
Minimize Risk: Adhere to the principles of modern portfolio theory.
Control Costs: Avoid unnecessary trading.
Follow the Strategy: Stay true to the high-level regime target.
To do this, we solve a mathematical optimization problem, known as a convex program.
The Optimization Problem in Plain English
Every day, the allocator seeks to find the set of weights w that minimizes a combined objective function:

min over w:  wᵀΣw + λ||w − w_{t−1}||² + γ||w − w^tgt||²
Let’s break down each component:
wᵀΣw (The Risk Minimizer): This is the classic formula for portfolio variance. Σ (Sigma) is the EWMA covariance matrix of our sleeve returns, which tells us how our strategies are currently moving in relation to each other. This term’s goal is to find the combination of weights that creates the smoothest possible return stream (lowest volatility). To stabilize noisy Σ estimates over small sample windows, we may apply covariance shrinkage (Ledoit–Wolf or simple target shrinkage) before feeding Σ to the optimizer.
import numpy as np
from sklearn.covariance import LedoitWolf

def compute_shrunk_cov(X_returns, method="ledoit", delta=0.10, min_obs=50):
    """
    X_returns: np.array shape (T, N) or pd.DataFrame -> returns per period (not percent-scaled mix)
    method: "ledoit" or "target"
    delta: shrinkage intensity for target shrinkage (publish and sweep)
    """
    X = np.asarray(X_returns, dtype=float)
    # drop rows with any NaN/inf
    X = X[np.isfinite(X).all(axis=1)]
    T, N = X.shape
    # safety fallback for very small samples
    if T <= max(2 * N, min_obs):
        # very small sample: return scaled identity (safe) or apply heavy shrink
        S = np.cov(X, rowvar=False) if T > 1 else np.eye(N)
        mu = np.trace(S) / N
        return mu * np.eye(N) + 1e-6 * np.eye(N)
    # compute covariance
    if method == "ledoit":
        lw = LedoitWolf().fit(X)
        S_shrink = lw.covariance_
    else:
        S = np.cov(X, rowvar=False)
        mu = np.trace(S) / N
        S_shrink = (1.0 - delta) * S + delta * mu * np.eye(N)
    # numerical nugget
    S_shrink += 1e-6 * np.eye(N)
    return S_shrink
λ||w − w_{t−1}||² (The Cost Controller): This is the turnover penalty. It measures the difference between the new weights w and yesterday’s weights w_{t−1}. By squaring this difference, it heavily penalizes large, sudden shifts in the portfolio. The parameter λ (lambda) acts like a thermostat for cost sensitivity; a higher λ makes the allocator more reluctant to trade, preserving capital.
γ||w − w^tgt||² (The Strategic Guide): This is the regime target penalty. It measures how far the new weights w deviate from the ideal target weights w^tgt provided by our regime model. The parameter γ (gamma) controls how strongly we adhere to the strategic plan. A higher γ ensures the portfolio stays closely aligned with the regime’s intended allocation.
The Rules (Constraints)
The optimizer must find its solution while respecting a set of non-negotiable rules:
∑ wᵢ = 1: The weights must sum to 100%. We are always fully invested.
0 ≤ wᵢ ≤ w_max: Weights must be non-negative (no shorting of entire sleeves), and no single sleeve can exceed a maximum allocation (w_max, e.g., 90%). This prevents over-concentration and ensures the portfolio remains diversified.
Why is This Approach Superior?
This method is far more intelligent than simpler alternatives:
It Beats Naive Allocation: Simply jumping to the target weights ignores both transaction costs and the real-time covariance between strategies. Our optimizer accounts for both, leading to a more robust and profitable outcome. In practice, even a small turnover penalty often cuts realized trading friction by 20-30% with negligible impact on the Sharpe ratio.
It’s Guaranteed to be Optimal: This is a convex problem. In mathematical terms, this means the objective function is a smooth bowl shape, and the constraints define a simple feasible region. As a result, there is a single, guaranteed global best solution. We don’t have to worry about the solver getting stuck in a suboptimal local minimum.
Implementation Details
Technology: This is implemented using the CVXPY library in Python, which is designed specifically for defining and solving convex optimization problems. It uses an underlying high-speed solver (like OSQP) that typically finds the optimal solution in under a second.
Robustness: In the rare event that the solver fails, the system has a simple and robust fallback: it sets today’s weights to a 50/50 blend of yesterday’s weights and the new regime target (0.5 * w_prev + 0.5 * w_tgt). This ensures a smooth, sensible allocation even if the primary process encounters an error.
import logging

import cvxpy as cp
import numpy as np

logger = logging.getLogger(__name__)

def solve_weights(
    S: np.ndarray,
    w_prev: np.ndarray,
    w_target: np.ndarray,
    lam_turn: float = 5e-3,
    gamma_tgt: float = 5e-2,
    wmax: float = 0.9,
) -> np.ndarray:
    """
    Solve weights for a given covariance matrix, previous weights, and target weights.

    Maths:
        min w^T Σ w + λ||w - w_prev||^2 + γ||w - w_target||^2
        s.t. sum w = 1, 0 <= w_i <= wmax

    Args:
        S: Covariance matrix
        w_prev: Previous weights
        w_target: Target weights
        lam_turn: Turnover penalty
        gamma_tgt: Target penalty
        wmax: Maximum weight

    Returns:
        Weights
    """
    try:
        w = cp.Variable(3)
        obj = (cp.quad_form(w, S)
               + lam_turn * cp.sum_squares(w - w_prev)
               + gamma_tgt * cp.sum_squares(w - w_target))
        cons = [cp.sum(w) == 1, w >= 0, w <= wmax]
        cp.Problem(cp.Minimize(obj), cons).solve(solver=cp.OSQP, verbose=False)
        val = np.array(w.value).reshape(-1)
        if not np.all(np.isfinite(val)):
            raise RuntimeError("solver returned non-finite weights")
    except Exception as e:
        logger.warning(f"Allocator fallback to blend: {type(e).__name__}: {e}")
        val = 0.5 * w_prev + 0.5 * w_target
    # project back to the feasible region and renormalize
    val = np.clip(val, 0, wmax)
    s = val.sum()
    return val / s if s > 0 else np.array([1/3, 1/3, 1/3])
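A usage sketch tying the pieces together, assuming compute_shrunk_cov and solve_weights from above are in scope (the returns matrix here is synthetic):

import numpy as np

rng = np.random.default_rng(0)
R = rng.normal(0, 0.01, size=(250, 3))      # daily sleeve returns, T x 3
S = compute_shrunk_cov(R)                   # shrunk covariance from earlier
w_prev = np.array([1/3, 1/3, 1/3])
w_tgt = np.array([0.60, 0.10, 0.30])        # Trend-regime target
w_new = solve_weights(S, w_prev, w_tgt)
print(w_new.round(3), w_new.sum())          # weights between 0 and 0.9, summing to 1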
The convex allocator is the portfolio’s final control layer: it translates the high-level strategic goal from the regime model into a real-world portfolio that is risk-managed, cost-efficient, and optimally positioned for the current market.
5) Cost-aware execution
Every trade we make pushes the market, however slightly. We must model this cost explicitly. A standard model for estimating the cost of executing an order of size Q is:

Cost(Q) = a·Q + b·Q²

Here Q is our order size, typically measured as a percentage of the average daily volume (% ADV) over our trade horizon. The linear term (a·Q) represents the fixed cost of crossing the bid-ask spread. The quadratic term (b·Q²) represents the non-linear “market impact”: the additional slippage we incur as our large order consumes liquidity and moves the price against us.
Dynamic Venue & Order Type Mix
Not all orders are created equal. We dynamically choose our execution method based on the signal’s urgency.
Taker Orders: When a signal is strong and urgent, we execute as a “taker,” hitting the bid or lifting the offer. This guarantees a fast fill but means we pay the full spread.
Maker Orders: When a signal is less urgent, we act as a “maker,” placing a passive limit order on the book. This allows us to earn the spread (or a rebate), but we risk the market moving away from our order and not getting filled.
Our system uses an “urgency score” from the alpha model to decide the optimal maker/taker split for every trade.
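A sketch of one possible urgency-to-order-type mapping; the thresholds and the linear ramp are assumptions for illustration, not our production calibration:

def taker_fraction(urgency: float, lo: float = 0.3, hi: float = 0.8) -> float:
    """
    Map an alpha urgency score in [0, 1] to the fraction of the order sent
    as taker. Below `lo` we rest fully passive; above `hi` we cross fully.
    """
    if urgency <= lo:
        return 0.0
    if urgency >= hi:
        return 1.0
    return (urgency - lo) / (hi - lo)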
Hunting for “Queue Alpha”
Simply placing a passive maker order is not enough. We actively seek “queue alpha” by timing our placement. This means we analyze the order book’s microstructure—specifically quote imbalances and cancel rates—to find moments of positive selection. The goal is to post our limit order just before the price is likely to move in our favor, increasing our probability of a favorable fill.
A Protective Kill-Switch
No execution model is perfect. We need a safety mechanism. We constantly monitor our realized slippage (the difference between the expected fill price and the actual fill price) against our model’s prediction. If the realized slippage consistently exceeds the model’s forecast by a predefined amount (e.g., two standard deviations over the last M trades), a kill-switch is triggered. This aborts the execution algorithm to prevent a flawed model from bleeding the account dry.
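A sketch of this kill-switch; the window M and the sigma trigger follow the text, while the exact test statistic is a simplified assumption:

from collections import deque
import statistics

class SlippageKillSwitch:
    """Abort execution if realized slippage runs persistently above forecast."""
    def __init__(self, m_trades: int = 100, n_sigmas: float = 2.0):
        self.errors = deque(maxlen=m_trades)  # realized minus predicted slippage
        self.n_sigmas = n_sigmas

    def record(self, realized_bps: float, predicted_bps: float) -> bool:
        """Record one fill; returns True if the kill-switch should fire."""
        self.errors.append(realized_bps - predicted_bps)
        if len(self.errors) < self.errors.maxlen:
            return False  # not enough history yet
        mean = statistics.fmean(self.errors)
        sd = statistics.pstdev(self.errors)
        # fire when mean excess slippage exceeds n sigmas of recent error noise
        return mean > self.n_sigmas * sd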
6) Sleeve rules that survive live
The Trend Long Sleeve
This sleeve is designed to capture sustained, directional upward moves.
Entry Signal: The entry is a two-part confirmation. We enter on an N-day breakout (e.g., price hitting a new 50-day high), but only if the ADX indicator is above 20. This ADX filter is crucial; it confirms that the breakout is occurring within a strong, directional trend, not just random market noise. This helps avoid “false breakouts” in choppy markets.
Exit Signal: We exit under two conditions. The first is a clear stop-loss, like an opposite breakout (e.g., price hitting a new 20-day low). The second is more proactive: we exit if the trend’s conviction fades, which we measure by a collapse in the R² of price regressed against time. This allows us to get out before the trend fully reverses.
Position Sizing: Positions are volatility-scaled, meaning we take smaller positions in highly volatile assets to equalize risk across the portfolio. All positions are also liquidity-capped, ensuring we never trade a size so large that it would significantly impact the market and incur prohibitive costs.
Whipsaw Damping (Optional): To reduce whipsaws, the sleeve can require agreement between a fast (20-day) and slow (100-day) breakout. A trade is only taken if both signals agree, which significantly reduces entries on short-lived, “whipsaw” moves.
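A sketch of this agreement filter in the same pandas style as the signals code later in the post; the shift(1) prior-bar channel is our choice here, so a close is never compared against its own bar’s high:

import pandas as pd

def dual_breakout_long(df: pd.DataFrame, fast: int = 20, slow: int = 100) -> pd.Series:
    """Entry only when the fast and slow Donchian breakouts agree."""
    hi_fast = df.high.rolling(fast, min_periods=fast).max().shift(1)
    hi_slow = df.high.rolling(slow, min_periods=slow).max().shift(1)
    return (df.close > hi_fast) & (df.close > hi_slow)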
The Short Sleeve
This sleeve is designed to profit from market downturns and crashes. Shorting is inherently risky, so its rules are even more stringent.
Entry Signal: This requires a confluence of three factors: a downside price breakout, confirmation that market breadth is negative (most assets are falling), and a spike in volatility. This multi-factor trigger ensures we are only shorting during a genuine, market-wide panic, not just because a single asset is having a bad day.
Exit Signal: Exits are based on signs of panic exhaustion (e.g., a massive volume spike on a down day) or a simple time-stop. The time-stop is critical; because markets have a natural upward drift, we cannot afford to stay in a short position that slowly grinds against us while racking up borrowing fees.
Hard Controls: This sleeve operates with several non-negotiable safety checks: it verifies borrow availability, places caps on acceptable borrow rates, and uses a spread-to-ATR filter to avoid trading illiquid assets where costs would be astronomical. It also maintains a no-go list of assets that are historically too volatile or expensive to short.
The Mean-Reversion / Scalper Sleeve
This sleeve thrives in choppy, non-directional markets by making many small trades that bet on prices reverting to their recent average.
Signal Generation: Signals are derived from indicators that measure how “overstretched” the price is, such as its Z-score relative to a moving average (price-to-EMA z) or its deviation from the session’s anchored-VWAP. All signals are calculated on closed bars only to prevent acting on incomplete information and introducing lookahead bias.
Risk Management: Risk is managed tightly with time-stops (exit if the trade isn’t profitable within K bars) and ATR-based stops and targets. To prevent over-trading, there are strict caps on the maximum number of trades allowed per day.
Cost Control: This sleeve is highly sensitive to costs. It only crosses the spread (uses an aggressive taker order) for the strongest signals. For all other trades, it uses passive maker orders and relies on an online slippage model that adapts to current market conditions.
Self-Improving Instrumentation: The sleeve is designed to learn. It logs every fill, snapshots the order book, and constantly calculates its selection alpha (i.e., “are my trades more informed than random?”). If it identifies specific market conditions or time windows where its trades consistently perform poorly (negative selection), it automatically learns to stop trading during those periods.
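A sketch of the negative-selection filter; bucketing by hour of day and the cutoff values are illustrative assumptions (production buckets by richer market conditions):

import pandas as pd

def bad_hours(fills: pd.DataFrame, min_trades: int = 30, cutoff_bps: float = -1.0) -> set:
    """
    fills: one row per fill with columns ['ts', 'pnl_bps'].
    Returns hours of day where average per-trade P&L is persistently negative.
    """
    f = fills.copy()
    f["hour"] = pd.to_datetime(f["ts"]).dt.hour
    stats = f.groupby("hour")["pnl_bps"].agg(["mean", "count"])
    return set(stats[(stats["count"] >= min_trades) & (stats["mean"] < cutoff_bps)].index)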
Code: engines, allocator, regime
import numpy as np, pandas as pd

def ema(s, n):
    return s.ewm(span=n, adjust=False).mean()

def macd_lines(close, f=12, s=26, sig=9):
    m = ema(close, f) - ema(close, s)
    ms = m.ewm(span=sig, adjust=False).mean()
    return m, ms

def adx(df, n=14):  # compact Wilder ADX
    up, dn = df.high.diff(), -df.low.diff()
    plus_dm = ((up > dn) & (up > 0)).astype(float) * up
    minus_dm = ((dn > up) & (dn > 0)).astype(float) * dn
    tr = pd.concat([(df.high - df.low),
                    (df.high - df.close.shift()).abs(),
                    (df.low - df.close.shift()).abs()], axis=1).max(axis=1)
    atr = tr.ewm(alpha=1/n, adjust=False).mean().replace(0, np.nan)
    pdi = 100 * (plus_dm.ewm(alpha=1/n, adjust=False).mean() / atr)
    mdi = 100 * (minus_dm.ewm(alpha=1/n, adjust=False).mean() / atr)
    dx = 100 * (pdi.subtract(mdi).abs() / (pdi + mdi)).fillna(0)
    return dx.ewm(alpha=1/n, adjust=False).mean()

def signals(df):
    df = df.copy()
    df["EMA9"] = ema(df.close, 9)
    df["EMA50"] = ema(df.close, 50)
    df["DON_H"] = df.high.rolling(50, min_periods=50).max()
    df["DON_L"] = df.low.rolling(50, min_periods=50).min()
    df["MACD"], df["MACDS"] = macd_lines(df.close)
    df["ADX"] = adx(df)
    # Long-Trend
    long_pos = (df.close > df.DON_H) & (df.ADX >= 20)
    exit_long = (df.close < df.EMA50) | (df.ADX < 20)
    L = np.where(long_pos, 1, np.where(exit_long, 0, np.nan))
    L = pd.Series(L, index=df.index).ffill().fillna(0)
    # Short-Trend
    short_pos = (df.close < df.DON_L) & (df.ADX >= 20)
    exit_short = (df.close > df.EMA50) | (df.ADX < 20)
    S = np.where(short_pos, -1, np.where(exit_short, 0, np.nan))
    S = pd.Series(S, index=df.index).ffill().fillna(0)
    # MR/Scalper (EMA9 + MACD cross)
    prev_cross_up = (df.MACD.shift(1) <= df.MACDS.shift(1)) & (df.MACD > df.MACDS)
    prev_cross_dn = (df.MACD.shift(1) >= df.MACDS.shift(1)) & (df.MACD < df.MACDS)
    M = np.where((df.close > df.EMA9) & prev_cross_up, 1,
                 np.where((df.close < df.EMA9) & prev_cross_dn, -1, np.nan))
    M = pd.Series(M, index=df.index).ffill().fillna(0)
    return L.astype(float), S.astype(float), M.astype(float)
Covariance
def ewma_cov(X: pd.DataFrame, halflife=20) -> pd.DataFrame:
    w = np.log(2) / halflife
    mu = np.zeros(X.shape[1])
    S = np.zeros((X.shape[1], X.shape[1]))
    for x in X.to_numpy():
        d = x - mu
        mu = mu + w * (x - mu)
        S = (1 - w) * S + w * np.outer(d, d)
    S = (S + S.T) / 2 + 1e-8 * np.eye(S.shape[0])
    return pd.DataFrame(S, X.columns, X.columns)
Sleeve assembly across assets
def sleeve_returns(ohlcv_by_symbol: dict, sleeve: str, target_vol=0.25):
    # ohlcv df per symbol with columns: timestamp, open, high, low, close, volume
    per_sym = []
    for sym, df in ohlcv_by_symbol.items():
        L, S, M = signals(df)
        if sleeve == "long":
            pos = L.clip(lower=0)            # long-only
        elif sleeve == "short":
            pos = (-S).clip(lower=0) * -1    # short-only as negative exposure
        elif sleeve == "mr":
            pos = M
        else:
            raise ValueError("sleeve must be long|short|mr")
        ret = df.close.pct_change().fillna(0.0) * pos.shift(1).fillna(0.0)
        ret_vt, _ = vol_target(ret, target_annual=target_vol)
        # basic cost application example
        rc = apply_costs(ret_vt, pos.shift(1).fillna(0.0), df.close, df.volume)
        per_sym.append(rc.rename(sym))
    R = pd.concat(per_sym, axis=1).fillna(0.0)
    # equal risk across symbols via EWMA vol scaling
    vol = R.ewm(halflife=20).std()
    inv = 1.0 / vol.replace(0, np.nan)
    w_sym = inv.div(inv.sum(axis=1), axis=0).fillna(0.0)
    return (R * w_sym).sum(axis=1)
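The vol_target and apply_costs helpers are referenced above but not shown in the post. Here are minimal sketches consistent with how they are called; the annualization constant (1-minute bars, 24/7 market), the leverage cap, and the flat fee are our assumptions:

import numpy as np
import pandas as pd

def vol_target(ret: pd.Series, target_annual: float = 0.25,
               halflife: int = 20, ann: float = np.sqrt(365 * 24 * 60)):
    """Scale returns to a target annualized vol via EWMA vol (1m bars assumed)."""
    vol = ret.ewm(halflife=halflife).std() * ann
    lev = (target_annual / vol.replace(0, np.nan)).clip(upper=5.0).shift(1).fillna(0.0)
    return ret * lev, lev

def apply_costs(ret: pd.Series, pos: pd.Series, close: pd.Series,
                volume: pd.Series, fee_bps: float = 7.5) -> pd.Series:
    """Charge a flat fee (in bps) on every unit of position turnover.
    close/volume are accepted so a fuller model can add %ADV impact."""
    turnover = pos.diff().abs().fillna(0.0)
    return ret - turnover * fee_bps / 1e4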
Conclusion
Crypto markets are noisy, regime-shifting, and cost-sensitive. Long-only momentum leaks outside expansions, short sleeves bleed in rallies, and scalpers drown in crashes. The answer is not guessing the future but classifying the present and routing risk dynamically.
Our framework does three things well:
Diversifies drivers: trend, crash, and mean-reversion engines.
Adapts to states: a regime layer that evolves from simple thresholds to adaptive, probabilistic models with L2 and ML extensions.
Controls costs: convex allocation with turnover penalties and execution models that respect liquidity.
This makes the system robust enough to survive live and flexible enough to evolve. From here, the roadmap is clear: broaden features (order-book, ML), scale sizing with conviction, add more states, and filter coin selection. That’s how we turn toy sleeves into an institutional-grade engine that keeps compounding through every tape change.
Happy trading!
Jakub
Hey Jakub, thanks for the Newsletter / Article
The formula ΔVar = (1 − ρ)/2 is only valid with weights w₁ = w₂ = 0.5, isn’t it? Otherwise, if that isn’t stated (sorry if I’m missing it), it seems a sort of magic. In short, portfolio variance depends on the weights, not just the standard deviations.
Thanks!