(Part 2) Time Machine Test – Non-parametric Statistical Filter

As promised yesterday, I tried a small change to the original “time machine” strategy first introduced by CSS Analytics. If you have not already, please go read these background articles on statistical filters and their importance in a trading system:

The Adaptive Time Machine: The Importance of Statistical Filters – CSS Analytics

Transactional vs Confidence-based Trading Strategies – MarketSci

In yesterday’s post, I used the Student’s t-test approach to filter the significance of each of the 50 strategies the algorithm can choose from. As you may know, the Student’s t-distribution is used to estimate the mean of a normally distributed population. Such an assumption about the distribution contradicts the kind of fat-tailed returns the market throws at us. To relax the normality assumption, one can use a non-parametric statistical test. Non-parametric statistics make fewer assumptions about the distribution of the underlying and can therefore be more robust, making them a prime choice for the “Time Machine” algorithm. More reading on the Wilcoxon signed-rank test can be found here: http://en.wikipedia.org/wiki/Wilcoxon_signed-rank_test.

For this test, I used the Wilcoxon signed-rank test instead of the Student’s t-test to establish the significance of strategies. The results below are for the strategy using a 95% significance filter on the S&P 500.
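Concretely, the only change from yesterday’s version is the test applied to each candidate strategy’s trailing returns. Below is a minimal sketch in R of what that swap might look like; the function name, the one-sided alternative and the vector-of-returns input are my assumptions for illustration, not necessarily the exact original implementation:

```r
# Hedged sketch: significance filter using the Wilcoxon signed-rank test instead of the t-test.
# 'strategy_returns' is assumed to be a numeric vector of daily returns produced by one of the
# 50 candidate strategies over the trailing look-back window.
is_significant <- function(strategy_returns, alpha = 0.05) {
  # H0: the returns are symmetric about zero; no normality assumption required,
  # unlike t.test(strategy_returns, alternative = "greater")
  wilcox.test(strategy_returns, mu = 0, alternative = "greater")$p.value < alpha
}

# e.g. keep only the strategies passing the 95% filter on a list of return vectors
# active <- Filter(is_significant, candidate_strategy_returns)
```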

The results obtained are not very different from the previous ones. It is interesting to see that the maximum drawdown is smaller when using the Wilcoxon test, probably a result of the increased robustness of the statistical test. For the time being I will keep testing the algorithm on different equity indices and asset classes; stay tuned.

QF


Time Machine Test (Part 1)

This series of posts is based on CSS Analytics’ Time-Machine post series. I recommend you read it, since this is not my original idea. However, I do think that my implementation differs slightly from CSS Analytics’.

The following results represent backtests of my version of the algorithm on different equity indices, using all freely available historical data for each ticker. I used index data since the available data points go much further back in history. I did test the algorithm on the available ETFs with similar results. All results presented below are frictionless.
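For readers who want a more concrete picture, here is a rough sketch of the kind of machinery involved. The specifics below (50 rules formed by following or fading the sign of the 1- to 25-day return, a 252-day trailing window, and a one-sided t-test at the 95% level) are my assumptions about a reasonable implementation, not CSS Analytics’ exact specification or my production code:

```r
# Build the 50 candidate rule positions: for each look-back k = 1..25, a follow-through
# rule (trade in the direction of the last k-day return) and a mean-reversion rule (fade it).
rule_positions <- function(log_returns, max_lag = 25) {
  pos <- list()
  for (k in 1:max_lag) {
    csum <- as.numeric(stats::filter(log_returns, rep(1, k), sides = 1))  # k-day return ending at t
    s <- c(NA, head(csum, -1))                                            # known at the close of t-1
    pos[[paste0("FT", k)]] <-  sign(s)
    pos[[paste0("MR", k)]] <- -sign(s)
  }
  do.call(cbind, pos)        # n x 50 matrix of +1 / -1 / 0 / NA positions
}

# Each day, test every rule's trailing returns for significance and let the
# significant rules vote on the next position.
time_machine_signal <- function(log_returns, window = 252, alpha = 0.05, max_lag = 25) {
  P <- rule_positions(log_returns, max_lag)
  R <- P * log_returns       # daily return earned by each rule
  signal <- rep(0, length(log_returns))
  for (t in (window + max_lag + 1):length(log_returns)) {
    idx <- (t - window):(t - 1)
    votes <- 0
    for (j in seq_len(ncol(P))) {
      x <- R[idx, j]
      if (all(is.finite(x)) && t.test(x, alternative = "greater")$p.value < alpha)
        votes <- votes + P[t, j]
    }
    signal[t] <- sign(votes) # flat when no rule passes the filter
  }
  signal
}
```

Multiplying the resulting signal by the next day’s return gives frictionless equity curves of the kind shown below.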

S&P 500

RUSSELL 2000

NASDAQ 100

S&P/TSX Comp. (Can)

FTSE 100 (U.K.)

NIKKEI 225 (JAP)

HANG SENG (CHN)

Summary

As the results above show, the algorithm is quite robust and adapts fairly well to different market regimes as well as to the differences in market behavior across these indices. Regardless, I think this is a very nice and simple concept that can still be improved.

I will try several modifications to the algorithm in the next few posts. For now you can expect tests on different asset classes: commodities, futures, currencies, etc. I also want to replace the t-test with a non-parametric Wilcoxon signed-rank test and see how the strategy performs when we drop the normality assumption when testing for significance. I also have other ideas in mind to improve the algorithm; stay tuned!

QF

First Order Autocorrelation as a Moderator of Daily MR

In the same line of thought as my previous post on volatility as a moderator of daily MR, this post will look at first-order autocorrelation. From Wikipedia: “Autocorrelation is the cross-correlation of a signal with itself. Informally, it is the similarity between observations as a function of the time separation between them.” Basically, it is the extent to which a series’ values are correlated with previous values (aka lagged values) in the same series. For example, the first-order autocorrelation of a daily logarithmic return series is the correlation between two subsets of the series over a given look-back period: the series as is, and the same series lagged by one period.

From a trading perspective, autocorrelation is a very simple tool to incorporate in a market regime indicator, or more broadly in a trading system. It is also interesting to see the evolution of autocorrelation in a given asset. The figure below shows the equity curve of the S&P 500 since 1957, the rolling Sharpe ratio of a daily MR strategy (RSI 2 50/50), and the first-order autocorrelation of the S&P 500 logarithmic returns using a rolling two-year look-back period.
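For anyone who wants to reproduce the autocorrelation series, here is a short sketch of how it might be computed; the 504-day width is my approximation of a two-year rolling window, and Yahoo’s ^GSPC series is just one convenient data source:

```r
library(quantmod)  # also loads xts/zoo

getSymbols("^GSPC", from = "1957-01-01")
log_ret <- diff(log(Cl(GSPC)))[-1]

# First-order autocorrelation of a window: correlation of the series with its 1-day lag
ac1 <- function(x) cor(head(x, -1), tail(x, -1))

rolling_ac1 <- rollapply(log_ret, width = 504, FUN = ac1, align = "right")
plot(rolling_ac1, main = "Rolling ~2-year first-order autocorrelation, S&P 500")
```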

As one would expect, first-order autocorrelation can help moderate daily MR performance. When autocorrelation is positive, daily follow-through is more profitable than mean reversion; around the turn of the century, however, autocorrelation switched to negative territory, which is consistent with the predominance of MR among profitable directional swing strategies, as very well explained in the blogs on my blogroll. I plan to post a more number-intensive note soon; this post was just to introduce autocorrelation as a valuable moderator of daily MR.

QF

Different Volatility Measures Effect on Daily MR

Daily swing trading strategies these days are usually inclined towards MR rather than trend FT. While there are a lot of factors that moderate a daily MR strategy, one of particular interest is volatility. Usually, higher volatility is favorable for short-term strategies (think RSI 2). For this bit of research I tried several volatility formulas over several time frames and compared their effect on RSI 2 returns.

Methodology: I applied the RSI 2 strategy to SPY’s returns from 2000, then classified volatility into percentiles. I used three different methods to compute the volatility figures: the classic standard deviation, the Garman Klass – Yang Zhang method, and the Yang Zhang method. The following formulas explain how I came up with the figures; I used the volatility function of the TTR package in R (a sketch of the calls follows the formulas):

Garman Klass – Yang Zhang method:

Yang Zhang method:

Where:

*The GK-YZ method allows for opening gaps while the YZ method is independent of both drift and opening gaps.
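Since the post already leans on the TTR package, here is roughly how the three volatility series might be computed; the 21-day window shown matches the monthly time frame (swap in 252 for the annual one), and SPY from Yahoo is assumed as the data source:

```r
library(quantmod)
library(TTR)

getSymbols("SPY", from = "2000-01-01")
ohlc <- SPY[, c("SPY.Open", "SPY.High", "SPY.Low", "SPY.Close")]

vol_sd   <- volatility(ohlc, n = 21, calc = "close")       # classic close-to-close standard deviation
vol_gkyz <- volatility(ohlc, n = 21, calc = "gk.yz")       # Garman Klass - Yang Zhang estimator
vol_yz   <- volatility(ohlc, n = 21, calc = "yang.zhang")  # Yang Zhang estimator

# Rank each day's volatility against its own history to form the percentile buckets
vol_pct <- rank(coredata(vol_yz), na.last = "keep") / sum(!is.na(vol_yz))
```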

The table below shows average trade result and winning percentage by percentile for monthly (21 days) and annual (252 days) time frames. Note that these time frames were arbitrarily chosen and a future post will likely expand on this with different time frames.

At first glance, it seems that the added computational complexity does not significantly improve the system’s accuracy. On the monthly time frame, though, the system’s average trade return in the lower percentiles was higher when moderated by the YZ indicator. Regardless, the rest of the results were not significant enough for me to declare one volatility indicator better suited for an MR system. Traders might want to KISS and stay with the good old classic standard deviation.

QF

The Importance of Return Distributions

When designing a strategy, I like to observe the probability distribution of the asset I plan to trade. It yields precious information on the behavior of the underlying and can also help identify the market regime in effect for a given period.

Of course, eyeballing the probability density curve or the empirical cumulative distribution can work, but from a quantitative trading point of view it does not really help; we want something more mechanical that we can rely on over and over. The answer is quite simple: basic distribution analysis can give us the mechanical capability we are looking for. Data on the mean, median, skew, kurtosis, etc. of a distribution can all be fed as parameters to be analyzed in a trading system.

We don’t have to use this technique only on raw return data; it is often helpful to know how a given indicator or strategy affects the return distribution. A comparison between the raw return distribution and one filtered by signals from a strategy can help indicate whether the strategy is traded with the right bias for the current market regime, or whether a strategy’s profitability is suddenly diminishing. Furthermore, for the daily swing traders amongst us, probability density analysis can help us see whether our indicators really help moderate daily MR.
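As a small illustration of what that comparison could look like in practice, here is a sketch using SPY; the RSI(2) < 50 condition is purely a hypothetical stand-in signal, and the PerformanceAnalytics moment functions are one convenient option among many:

```r
library(quantmod)
library(TTR)
library(PerformanceAnalytics)

getSymbols("SPY", from = "2000-01-01")
ret  <- dailyReturn(Cl(SPY), type = "log")
rsi2 <- RSI(Cl(SPY), n = 2)

# Returns realized only on days following a (hypothetical) signal
sig      <- as.logical(lag(rsi2, 1) < 50)
filtered <- ret[!is.na(sig) & sig]

moments <- function(x) {
  x <- as.numeric(na.omit(x))
  c(mean = mean(x), median = median(x), skew = skewness(x), kurt = kurtosis(x))
}

# Compare the raw distribution with the strategy-conditioned one
rbind(raw = moments(ret), strategy = moments(filtered))
```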

Finally, distribution analysis can be an extremely valuable tool when developing adaptive strategies. It provides a strategy with instant feedback on its performance and its effect on the return series of the underlying; taking that into consideration can be a good base to build on when designing your own adaptive algorithms (more on that later…).

QF

Shout-Outs

“I have no qualms about accepting a useful idea merely because it wasn’t my own.” ~ Thrawn

In this short post, I want to acknowledge the blogs and people (in no particular order) who motivated me to start the Quantum Financier blog. I am a big fan of their work and of their blogs, so a big shout-out to you gentlemen.

David Varadi of CSS Analytics

Michael Stokes of MarketSci

Max Dama of Max Dama on Automated Trading

Joshua Ulrich of FOSS Trading

Rob Hanna of Quantifiable Edges

and finally the Quantivity blog

QF

Welcome to the Quantum Financier Blog

This blog will contain the results of my research and the things I am currently working on. For now, I expect my posting to focus on applications of machine learning, mathematics and quantitative methods to develop automated trading strategies. My ultimate goal is the creation of adaptive investment strategies that “think” and “analyze” the market and evolve accordingly.

When reading my posts, keep in mind that I am only a student and that my goal is to learn as I post. I will make every effort not to make mistakes when posting, but it might happen. Regardless, I look forward to receiving feedback and comments on the content of this blog and to discussing the ideas posted.

QF