Using Probability Density as an Adaptive Mechanism

I will take a pause from the Time Machine series for now while I work on it some more and prepare future posts. Today I will follow up on my post on return distributions and show a simple way to include them in an adaptive strategy.

It has been discussed quite a lot in the blogosphere that strategies with fixed parameters are inferior to adaptive strategies. For example, the simplest daily MR strategy I can think of is probably RSI 2 50/50, but this strategy did not always work and I certainly don’t expect it to keep working forever. Furthermore, the most profitable lookback parameter for RSI also varies in time. This is where the return distribution is useful. From it, we can derive the probability density function and use that to create an adaptive mechanism.

Just a little background on the probability density function; from the wiki: “the density of a continuous random variable is a function that describes the relative likelihood for this random variable to occur at a given point in the observation space.” In plain language: it describes how likely the variable is to fall near a given value. I recommend using your favorite statistical software to work with it, unless you want to be doing integrals for a long time!
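To make this concrete, here is a minimal sketch of letting the software do the integral: fit a kernel density estimate to a return sample and read off the probability of a positive return. The returns here are synthetic, purely for illustration.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(42)
# fake daily returns standing in for a real return series
returns = rng.normal(loc=0.0005, scale=0.01, size=500)

kde = gaussian_kde(returns)                      # fit the density estimate
p_positive = kde.integrate_box_1d(0.0, np.inf)   # P(return > 0)
print(round(p_positive, 3))
```

`integrate_box_1d` handles the infinite upper bound for us; no integrals by hand required.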

For this test, I took SPY data, computed RSI values for different lookback periods (2 to 30), and then looked at the results for each strategy. Over rolling periods of 1 year and 6 months, I looked at the probability density of returns for every strategy, focusing specifically on the probability of returns greater than zero (this threshold can be raised; I just want to keep it basic here). I then traded the strategy that had the highest combination of the 1-year and 6-month values. That way, capital is allocated to the strategy with the parameter generating the highest probability of positive returns, as measured by the probability density function. I compared this strategy to RSI 2 50/50 and buy and hold.
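The selection logic above can be sketched as follows. This is an assumption-heavy illustration, not my exact backtest: the prices are synthetic rather than SPY, the probability of positive returns is estimated by the empirical fraction (a KDE could be substituted), and the “combination” of the 1-year and 6-month values is taken as a simple average.

```python
import numpy as np
import pandas as pd

def rsi(prices, lookback):
    """Plain RSI from a price series."""
    delta = prices.diff()
    gain = delta.clip(lower=0).rolling(lookback).mean()
    loss = (-delta.clip(upper=0)).rolling(lookback).mean()
    rs = gain / loss
    return 100 - 100 / (1 + rs)

def strategy_returns(prices, lookback):
    """RSI x 50/50: long below 50, short above 50, applied to next-day log returns."""
    r = rsi(prices, lookback)
    signal = pd.Series(np.where(r < 50, 1.0, -1.0), index=r.index).where(r.notna())
    log_ret = np.log(prices).diff()
    return signal.shift(1) * log_ret

# synthetic price path standing in for SPY
rng = np.random.default_rng(0)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 1500))))

lookbacks = range(2, 31)
strat = {lb: strategy_returns(prices, lb) for lb in lookbacks}

def p_positive(returns):
    r = returns.dropna()
    return (r > 0).mean()   # empirical estimate of P(return > 0)

# score each lookback on the most recent 1-year and 6-month windows
t = len(prices) - 1
scores = {}
for lb, r in strat.items():
    p_1y = p_positive(r.iloc[t - 252:t])
    p_6m = p_positive(r.iloc[t - 126:t])
    scores[lb] = (p_1y + p_6m) / 2   # assumed combination rule: simple average

best = max(scores, key=scores.get)   # lookback to trade next
```

In a full backtest this selection step would be repeated at every rebalance date rather than once at the end.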


The results are not particularly impressive, but the point of the article was to illustrate the concept as simply as possible. I believe there are ways to make this particular strategy more robust; for starters, taking a shorter lookback period to make it more sensitive to recent market data, or introducing a weighting scheme that emphasizes more recent data. I will let the reader experiment with it, and I would be happy to post results if you care to share. Even though the results are not spectacular, the strategy seems to adapt to the different waves in the market and allocate capital to a more appropriate RSI parameter length depending on the current market paradigm.



(Part 4) Time Machine Test – Commodities

Results on a commodities basket.

WTI Crude

Natural Gas




As you can see, the algorithm adapts to different classes of commodities and outperforms most of the buy and hold returns. But I want to emphasize that this concept alone is not something I would trade as is (it is not robust enough). The results are in no way good enough to rely on out-of-sample.

I found that it is much harder for the algorithm to find strategies significant enough to trade for long periods on these assets. For equities, there are on average 7 different active strategies (i.e. the level of significance is high enough) at the same time. The number is much lower with commodities and currencies, and the lack of diversification between strategies certainly hurts performance when compared to equity indices. It also concentrates exposure in a given strategy, adding a lot of volatility to the returns, as shown by the numbers above.


(Part 3) Time Machine Test – Currencies

Edit: This is a repost of the previous version of the post for currencies and commodities. When adapting my code for Datastream data, I made an error: the code was peeking, which explains the ridiculously straight equity curve. Here is the corrected version of the post. I sincerely apologize to readers for the inconvenience. I also want to assure you that the previous results on equity indices are correct; I had a couple of colleagues take a look to confirm that there were no bugs left.





The first thing I notice looking at these results is the difference in the usefulness of run analysis on currencies. It makes sense, since currencies are driven in good part by macroeconomic factors. It is also much harder for the algorithm to find significant strategies; the time in the market for the strategy is much less than with equities. Because of this, it becomes harder to squeeze alpha out of the strategy with a high confidence level in the analysis.


Blog Beauty Pageant

Just a quick post to tell you not to be surprised if I change the theme of my blog, or at the very least the font size, during the day; some readers have brought to my attention that posts are difficult to read. I will be experimenting with WordPress themes and HTML today to find a solution. It is made harder by the fact that I don’t really want to take a blog theme already used by the quant blogosphere’s big players (a real shame, since they are pretty nice). Regardless, expect a post testing the time machine algorithm on different asset classes, commodities and currencies, later today.

(Part 2) Time Machine Test – Non-parametric Statistical Filter

As promised yesterday, I tried a small change to the original “time machine” strategy first introduced by CSS Analytics. Now if you still have not, please go read these background articles on statistical filters and their importance in a trade system:

The Adaptive Time Machine: The Importance of Statistical Filters – CSS Analytics

Transactional vs Confidence-based Trading Strategies – MarketSci

In yesterday’s post, I used the Student’s t-test approach to filter the significance of each of the 50 strategies the algorithm can choose from. As you may know, the Student’s t-distribution is used to estimate the mean of a normally distributed population. Such an assumption on the distribution contradicts the kind of fat-tailed returns the market throws at us. To relax the normality assumption, one can use a non-parametric statistical test. Non-parametric statistics make fewer assumptions regarding the distribution of the underlying data and can therefore be more robust, making them a prime choice for the “Time Machine” algorithm. More reading on the Wilcoxon signed-rank test can be found here:

For this test, I used the Wilcoxon signed-rank test instead of the Student’s t-test to establish the significance of strategies. The results below are for the strategy using a 95% significance filter on the S&P 500.

The results obtained are not very different from the previous ones. It is interesting to see that the maximum drawdown is smaller when using the Wilcoxon test. This is probably caused by the increased robustness of the statistical test. For the time being I will keep testing the algorithm on different equity indices and asset classes; stay tuned.


Time Machine Test (Part 1)

This series of posts is based on CSS Analytics’ Time-Machine post series. I recommend you read it, since this is not my original idea. However, I do think that my implementation differs slightly from CSS Analytics’.

The following results represent backtests using my version of the algorithm on different equity indices on all free historical data available for each ticker. I used index data since the available data points go way further in history. I did test the algorithm on available ETFs with similar results. All results presented below are frictionless.

S&P 500



S&P/TSX Comp. (Can)

FTSE 100 (U.K.)




As the results above show, the algorithm is quite robust and does adapt fairly well to different market regimes as well as to the differences in market behavior for all these indices. Regardless, I think this is a very nice and simple concept that can still be improved.

I will try several modifications to the algorithm in the next few posts or so. For now you can expect tests on different asset classes: commodities, futures, currencies, etc. I also want to try to replace the t-test with a non-parametric Wilcoxon signed-rank test and see how the strategy performs when we get rid of the normality assumption when testing for significance. I also have other ideas in mind to improve the algorithm, stay tuned!


First Order Autocorrelation as a Moderator of Daily MR

In the same line of thought as my previous post on volatility as a moderator of daily MR, this post will look at first order autocorrelation. From the wiki: “Autocorrelation is the cross-correlation of a signal with itself. Informally, it is the similarity between observations as a function of the time separation between them.” Basically, it is the extent to which series values are correlated with previous values (aka lagged values) in the same series. For example, the first order autocorrelation of a daily logarithmic return series is the correlation between two subsets of the return series: the series as is over a lookback period, and the same subset lagged 1 period.

From a trading perspective, autocorrelation is a very simple tool to incorporate in a market regime indicator, or more globally in a trading system. It is also interesting to see the evolution of autocorrelation in a given asset. The figure below shows the equity curve of the S&P 500 since 1957, the rolling Sharpe ratio of a daily MR strategy (RSI 2 50/50) and finally the first order correlation of the S&P logarithmic return using a rolling 2 years look back period.
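The rolling autocorrelation in that figure can be computed in a few lines. A minimal sketch with pandas, on a synthetic return series standing in for the S&P 500 log returns (504 trading days approximates the 2-year window):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
# synthetic daily log returns standing in for the S&P 500 series
log_ret = pd.Series(rng.normal(0, 0.01, 1000))

# first order autocorrelation over a rolling ~2-year window:
# correlation of the return series with itself lagged one day
window = 504
rolling_ac1 = log_ret.rolling(window).corr(log_ret.shift(1))
```

Plotting `rolling_ac1` alongside the equity curve and the rolling Sharpe ratio of the MR strategy reproduces the kind of figure described above.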

As one would expect, first order autocorrelation can help moderate daily MR performance. When autocorrelation is positive, daily follow-through is more profitable than mean reversion; around the turn of the century, however, autocorrelation switched to negative territory, which is consistent with the predominance of MR among profitable directional swing strategies, as very well explained in the blogs on my blogroll. I plan to post a more number-intensive note soon; this was just a post to introduce autocorrelation as a valuable moderator of daily MR.