One Size Does Not Fit All

In our eternal search for the trading Holy Grail, it is often tempting to try to find the “ultimate” signal (indicator) and apply it to as many instruments as we can. This single-solution approach for the most part fails miserably; think of a carpenter with only a hammer in his (or her; QF is an advocate for equal opportunity) toolbox. While making signals adaptive is definitely an improvement, I think we sometimes miss the point.

Instead of harassing and optimizing a signal ad absurdum to improve the backtest, one would be better served by looking at the big picture. One signal in itself only contains so much information. While there are a lot of good indicators available that perform very well by themselves (the blogosphere is a really rich ecosystem in that regard), their power is only magnified when combined with other signals containing different information. The secret is in the signal aggregation; in other words, in how we form and use an ensemble of signals, each isolating a different piece of information, to build a profitable strategy (note the use of the word strategy as opposed to system; careful wordsmithing aside, the difference is paramount). This is a topic I have been taking a close look at recently, and I think the blogosphere is a perfect tribune to share some of my findings. As a starter, here are some points I will be touching on in the upcoming series of posts.
1. What are the basic intuitions behind ensembles and why can they help in building trading strategies?

2. How do we isolate and quantify specific pieces of information and then observe their effect on the instruments we trade?

3. How do we evaluate the current pertinence of the signals?

4. Finally, how do we aggregate all the useful information and build a strategy from the ground up?

The mechanics are going to be explained using a simplified example for readers to follow along, but the intuition will be the same as the one behind the first QF strategy to be tracked in real time on the blog. I still don’t have a fancy name for it, but it’ll get one for its official launch.


Update, Milestone, and Unfinished Business

First of all, let me apologize for being off the grid for so long and not providing you with any of my geek prose recently. My final university semester classes are coming to an end today, and I will resume regular posting after finals. Rest assured that my absence from the blogosphere has not caused quant power atrophy; it was merely a by-product of how busy I was with school and interviewing. Thank you all for sticking with me during this dry spell.

The blog recently passed 50,000 views; while not an incredibly big number compared to the blogosphere’s behemoths (see blogroll), it is a milestone I am personally really happy about. I never really thought that this blog would take on these proportions, and it keeps surprising me by bringing opportunities I never thought would be possible. For that, I must thank you.

Unfinished Business
I know some of you were eagerly waiting for the TAA system post I kept saying would be coming soon; I am sorry to tell you that it will not. My services were hired by a private firm and the intellectual property developed is protected by a non-disclosure agreement. I might, however, discuss the intuition at a high level if there is still interest.

Finally, some of you might have noticed the new “QF Strategies” tab up top. For now it shines by its emptiness, but not for much longer; I will be tracking strategies there soon. Bear with me.


Model Scalability

When designing a model, an aspect that I often overlook is scalability. First, a definition from Investopedia: “A characteristic of a system, model or function that describes its capability to cope and perform under an increased or expanding workload. A system that scales well will be able to maintain or even increase its level of performance or efficiency when tested by larger operational demands.”

Now, most of you probably wonder why I would overlook such a crucial aspect of model building. The reason is very simple: I never had to. Most of the models I design are for my personal trading, and since I don’t have millions of dollars in capital to trade (yet!), the scalability requirements are minimal. Most of my trading is in the mini futures and I only trade a few contracts per symbol. With this in mind, I don’t have to worry too much about slippage when I place my orders, since the effects of the order book are negligible. However, chances are I am not going to design models solely for my personal trading during my career.

The scalability requirements for, say, a hedge fund are very different. Imagine trading a high-turnover strategy on a single symbol; for the sake of example, consider an RSI2 strategy. It is a very short-term strategy with a relatively high turnover for an end-of-day strategy. Trading this strategy with 50k is feasible (not optimal, but not too bad); trading the RSI2 signal on a single symbol with 100mm is very impractical. Think how much slippage would affect the strategy. At the time of writing, SPY opened at 133.02, so trading 100mm would amount to roughly 751,767 shares at the open quote. Admittedly, I don’t have the exact numbers, but I doubt that the order book opened almost a million shares deep at the ask, so we would expect some slippage. Presumably it would be quite significant and would materially change our expectation of return for the RSI2 strategy.
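The order size above is just arithmetic; a minimal sketch, using the illustrative figures from this post rather than live market data, makes it explicit:

```python
# Rough order sizing: how many shares a $100mm allocation represents
# at the quoted open price. Figures are the illustrative ones from
# the post, not live market data.
capital = 100_000_000   # dollars to deploy
open_price = 133.02     # SPY open quote at the time of writing

shares = capital / open_price
print(f"{shares:,.0f} shares")  # -> 751,767 shares
```

Any realistic estimate would then haircut this by the depth actually available at the ask.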

Now, I know that few hedge funds would trade an RSI2 on SPY alone; it is only a conceptual example to support understanding, and I am not claiming that RSI2 is non-scalable (nor that it is). To evaluate scalability, we need robust backtesting that clearly estimates the impact of the order book, latency (if intraday), and other relevant factors on returns. Another angle to consider is scalability across assets. Following our RSI2 example, if we allocate 50% each to SPY and QQQQ, in theory we reduce the weighted impact of slippage and other transaction costs on a given symbol (i.e. the marginal transaction cost per symbol decreases as we diversify across assets). However, that effect is not necessarily a linear one, as nicely explained by Joshua in the comment section below.

Other avenues to consider in scalability are left to the interested reader, who can always contact me via email; I have recently been paying closer attention to the issue myself. Furthermore, for readers with career aspirations similar to mine, remember that model scalability is directly related to employability!


S&P 500 Sectors Analysis

To follow up on the last post, and to tie in nicely with Engineering Returns’ recent sector rotational system, I will show a factor decomposition of the S&P 500 into its different sectors. In essence, you can think of it as a multifactor analysis where I try to determine which sectors are relevant in predicting the S&P 500 over a given period. This kind of analysis is important since a lot of the point-and-click trading in proprietary firms is done based on price action and correlation across markets. It is the latter part I address today.

I looked at the period from Jan. 1, 2008 to today in both analyses. For linear least squares, we regress the next-day SPY returns on the current returns of our sector ETFs (I used the SPDRs) as independent variables. Results are below:

What we see from the table is that the consumer discretionary, financials, materials, and technology sectors are statistically significant. Additionally, the financials and technology sectors seem to be precursors of reversals, as indicated by their negative coefficients. The opposite holds true for the consumer discretionary and materials sectors, which do not appear to be so contrarian. This simple analysis suggests that these four sectors are the best candidates to consider when looking into cross-market relationships.
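A minimal sketch of the regression just described, with synthetic returns standing in for the actual SPDR and SPY series (the real analysis would of course use downloaded price data):

```python
import numpy as np

# Regress next-day SPY returns on the current-day returns of the nine
# SPDR sector ETFs. Synthetic returns stand in for the real series.
rng = np.random.default_rng(42)
n_days, n_sectors = 500, 9
sector_ret = rng.normal(0.0, 0.01, size=(n_days, n_sectors))
spy_ret = sector_ret.mean(axis=1) + rng.normal(0.0, 0.005, n_days)

X = np.column_stack([np.ones(n_days - 1), sector_ret[:-1]])  # intercept + today's sector returns
y = spy_ret[1:]                                              # next-day SPY return
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coefs[1:])  # one estimated coefficient per sector
```

In practice one would also look at the t-statistics (e.g. via statsmodels) to judge which sector coefficients are significant, as in the table above.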

Another aspect of trading where factor analysis can come into play relates to diversification benefits. Using principal component analysis, we extract the co-movements of our sector ETFs and SPY. Similarly, we could use the same type of method to extract the co-movements across country indices, or some other discretionary breakdown. Looking at the factor loadings of each ETF on the first three principal components (which explain about 95% of the covariance matrix), we can get a better feel for the value of allocating capital to one ETF versus another, and put the illusion of diversification in perspective. In these days of high correlation across asset classes, people trading TAA systems would be well served by such analysis in their asset allocation and weighting. The factor loadings are below:

Meric et al. (2006) say it best: “The sectors with high factor loadings in the same principal component move closely together making them poor candidates for diversification benefits. The higher the factor loading of a sector in a principal component, the higher its correlation is with the other sectors with high factor loadings in the same principal component. The sector indexes with high factor loadings in different principal components do not move closely together and they can provide greater diversification benefit. Some sector indexes may have high factor loadings in more than one principal component. It indicates that the movements of these sectors have some similarity to the movements of sectors with high factor loadings in more than one principal component. These sectors are not good prospects for successful sector portfolio diversification.” In depth interpretation of the table is left to the curious reader.
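The PCA step can be sketched as follows, again with synthetic returns in place of the ETF series; the eigenvectors of the covariance matrix are the factor loadings, and the eigenvalues measure variance explained:

```python
import numpy as np

# Principal component analysis of the ETF return covariance matrix.
rng = np.random.default_rng(7)
returns = rng.normal(0.0, 0.01, size=(500, 10))  # 9 sectors + SPY, synthetic
cov = np.cov(returns, rowvar=False)

eigvals, eigvecs = np.linalg.eigh(cov)        # eigh: cov is symmetric
order = np.argsort(eigvals)[::-1]             # sort components by variance explained
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = np.cumsum(eigvals) / eigvals.sum()
loadings = eigvecs[:, :3]                     # loadings on the first three components
print(explained[:3], loadings.shape)
```

ETFs that load heavily on the same component move together and offer little diversification, which is exactly the point of the Meric et al. quote above.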

In conclusion, factor decomposition/analysis can be very useful for traders to get a feel for their instrument of predilection. Be it for establishing significant lead/lag relationships or diversification caveats, both least squares and PCA can be valuable tools. Interested readers can also look at Granger causality for similar avenues.


Predictor(s)/Factor(s) Selection

Before getting into the post’s main subject, I want to mention a couple of things. First, the TAA system talked about before on the blog is still in the works; a busy schedule and work commitments made it difficult for me to polish and publish it as soon as I wanted. Secondly, I apologize for the New Year’s hiatus; I will be back to regular posting in the coming days.

Back to the subject at hand: predictor selection. For regular readers, machine learning will not come across as an unusual topic for the blog. I really enjoy using statistical learning models for my research and trading, and I talked about them a fair bit in earlier posts. However, reviewing the blog history recently, I noticed that I had overlooked this very important topic.

Just as we must be careful which predictor(s) we use with linear regression, or for that matter any forecasting model, we need to pay close attention to our inputs when using learning models (GIGO: garbage in, garbage out).

A “kitchen sink” approach, throwing in as many predictors as one can think of, is generally not the way to go. A large number of predictors often brings a high level of noise, which usually makes it very hard for the models to identify reliable patterns in the data. On the other hand, with very few predictors, it is unlikely that there will be many high-probability patterns to profit from. In theory, we are trying to minimize the number of predictors for a given accuracy level.

In practice, we usually start from what theory suggests, or from discretionary observation, to determine what should be included. The quantification of this process is often called exploratory factor analysis (EFA): basically, looking at the covariance matrix of our predictors and performing an eigenvalue decomposition. Doing this, we aim to determine the number of factors influencing our response (dependent) variable, and the strength of this influence. This technique is useful to better understand the relationship between a predictor and the response variable and to determine which predictors matter most when classifying. This type of covariance matrix decomposition should help us refine our predictor selection and hopefully improve classification performance; this process will be the object of the next few posts.
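As a rough illustration of the eigenvalue-decomposition screen, here is a sketch on synthetic predictors; the eigenvalue-greater-than-one cutoff is the common Kaiser rule, one of several ways to pick the number of factors:

```python
import numpy as np

# Count candidate factors by eigendecomposing the predictors'
# correlation matrix and keeping eigenvalues above one (Kaiser rule).
rng = np.random.default_rng(0)
predictors = rng.normal(size=(300, 8))             # 300 observations, 8 predictors
corr = np.corrcoef(predictors, rowvar=False)

eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]  # descending eigenvalues
n_factors = int((eigvals > 1.0).sum())
print(n_factors, np.round(eigvals, 2))
```

With real, correlated predictors, a few large eigenvalues would dominate; here the random inputs produce eigenvalues hovering around one, which is itself informative.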


Season’s Greetings

During this holiday period, I want to thank you all very much for the support you have shown this nascent blog in 2010. The good discussions I have had with readers in the comment section or via email made every minute of the adventure worth it. The feedback I received exceeded every expectation I ever had for the blog, and I hope that 2011 will be just as prosperous for the Quantum Financier blog.

To those of you taking time during these festivities to read this post, I wish you the best for the holiday period and a very successful year trading and evolving with the markets in 2011.



Market Rewind’s Sentiment Spreads Remix

In this post, I want to build on a good discussion in the comment section of this post: An Intermarket Ensemble Model for the S&P500 Using Market Rewind’s “Sentiment Spreads”. One of ETF Prophet’s contributors and the initial thinker behind the spreads, Mr. Pietsch, commented on CSS’s use of the sentiment spreads to predict the S&P 500. The part that piqued my interest was the very last question: what can you tell us about improving performance yet again with weak learner methodologies? While originally directed at Mr. Varadi, I felt compelled to try the idea.

The setup laid out in the post linked above is a good candidate for a support vector machine prediction model. You can think of the process as mapping the information contained in the spreads onto the ETF tracking the S&P 500. This approach, a weak form of ensemble learning, is the next level after single-indicator/variable models. Conceptually, one can think of market movements as the aggregation of a (very) high quantity of information and data. Trying to process the market and generate signals using a single raw filter/signal/strategy has little chance of succeeding for long without adaptation; using multi-variable adaptive models/strategies tends to tilt the odds in our favour. I would argue that two of the most important variables to take into consideration are price action and intermarket effects. Most technical indicators cover the price action angle; these sentiment spreads are a simple way to introduce the other, equally important, part into a quantitative strategy. For now, I will only take the spreads into consideration, but I plan to expand on this and show how we can combine price action and intermarket effects to our best advantage. The following results are from a \nu-SVM classification model trained on the last 100 days of spreads. Note that the sample is very small since I wanted to take all seven spreads together; it will be interesting to see how it plays out with more data at our disposal, and to test it using intraday data.
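A sketch of this setup using scikit-learn's NuSVC is below; the library choice and the \nu value are my assumptions (the post does not specify an implementation), and synthetic data stands in for the seven sentiment spreads:

```python
import numpy as np
from sklearn.svm import NuSVC

# Walk-forward nu-SVM classification: train on a rolling 100-day window
# of the seven spreads, predict the next day's direction.
rng = np.random.default_rng(11)
n_days, n_spreads = 150, 7
spreads = rng.normal(size=(n_days, n_spreads))   # synthetic spread values
direction = np.where(spreads.sum(axis=1) + rng.normal(size=n_days) > 0, 1, -1)

window = 100
preds = []
for t in range(window, n_days):
    model = NuSVC(nu=0.3, kernel="rbf")
    model.fit(spreads[t - window:t], direction[t - window:t])  # last 100 days only
    preds.append(int(model.predict(spreads[t:t + 1])[0]))
print(preds[:10])
```

The rolling window is what keeps the model adaptive: each day's prediction uses only the most recent 100 observations.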


Market Neutral Strategy Portfolio

In continuation of the last post on market neutrality, this post will look into obtaining market neutrality for a portfolio of strategies. As mentioned in the last post, investors can trade many strategies with conflicting signals. For this particular example, imagine an investor who trades two strategies on the S&P 500: an RSI2 and the 50-200 moving average crossover. You can imagine that the signals from these strategies are going to conflict. The RSI2 is a really short-term strategy with a high turnover, while the so-called “golden cross” is a polar opposite: a very slow moving, low-frequency strategy. Also assume this investor allocates 50% of his capital to each strategy.

Now, in this particular experiment, we have that portfolio of strategies and want to short or long some SPY in order to remain market neutral. To accomplish this, we will use the commonly used simple linear regression and the slightly more robust quantile regression.

Simply put, we use the portfolio (dependent) and market (independent) return series to obtain the prescribed hedge ratio, approximated by the regression coefficient. Everyone is likely familiar with linear regression, so I will skip the intro; quantile regression, however, is not as mainstream. It estimates the tau (\tau)-th conditional quantile function, giving the expected value of the response conditional on the chosen quantile (tau) of the predictor variables. The premise for using quantile regression is that we expect the regression coefficients to vary depending on the level of the predictor variable. Note that in this experiment I used linear quantile regression, but a non-linear version also exists.
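A minimal sketch of both hedge-ratio estimates on a synthetic 60-day window; rather than calling a dedicated library, the quantile fit here minimizes the pinball (check) loss directly, which is equivalent for the linear case:

```python
import numpy as np
from scipy.optimize import minimize

# Hedge ratio from OLS versus a median (tau = 0.5) quantile regression,
# fit on a synthetic 60-day window of portfolio and market returns.
rng = np.random.default_rng(4)
market = rng.normal(0.0, 0.01, 60)
portfolio = 0.6 * market + rng.normal(0.0, 0.005, 60)

# OLS slope: cov(portfolio, market) / var(market)
beta_ols = np.cov(portfolio, market)[0, 1] / market.var(ddof=1)

def pinball_loss(params, tau=0.5):
    intercept, beta = params
    resid = portfolio - (intercept + beta * market)
    return np.sum(np.where(resid >= 0, tau * resid, (tau - 1) * resid))

beta_q = minimize(pinball_loss, x0=[0.0, 1.0], method="Nelder-Mead").x[1]
print(beta_ols, beta_q)  # the hedge: short beta times portfolio value of SPY
```

For other values of tau, the same loss yields the conditional-quantile coefficients, which is where the two methods start to differ meaningfully.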

The results below are obtained using a 60-day lookback period for the hedge ratio calculation. They represent the equity curves of the portfolio traded with the market-neutral long/short hedge prescribed by the regression coefficient. A note of caution: doing this will reduce returns; however, it also reduces volatility.

I think this result is interesting in a couple of ways, but first I want to check whether market neutrality is effectively obtained. As explained in the last post, we often say we are market neutral if our beta is zero. Readers who have experimented with this concept know that the beta is never going to be strictly zero, but will oscillate around it. To test whether my goal was accomplished, I used a simple t-test on the 60-day rolling mean of the observed beta. Alternatively, I could have used a more robust bootstrap test, but I wanted to keep things simple. The t-test confirmed that the observed beta was effectively not different from zero and that we obtained theoretical market neutrality with both methods.
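The neutrality check might look like the following sketch, with a synthetic beta series in place of the one estimated from the hedged portfolio; note that the rolling mean induces autocorrelation in the observations, which is one reason a bootstrap test would be more robust:

```python
import numpy as np
from scipy.stats import ttest_1samp

# t-test of whether the 60-day rolling mean of the realized beta
# differs from zero. A synthetic beta series stands in for the one
# estimated from the hedged portfolio.
rng = np.random.default_rng(8)
beta_series = rng.normal(0.0, 0.05, 250)  # beta oscillating around zero
rolling_mean = np.convolve(beta_series, np.ones(60) / 60, mode="valid")

t_stat, p_value = ttest_1samp(rolling_mean, 0.0)
print(t_stat, p_value)  # a large p-value is consistent with beta = 0
```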

To conclude, I was happy the results confirmed my expectations, with quantile regression testing slightly better. I think there is value in quantile regression when considering market neutrality as an objective for a strategy or a portfolio of strategies. The dynamic linear modelling approach is left to the interested reader.


Market Neutrality

Market neutrality is one of those buzzwords thrown around quite a lot in finance; several hedge funds claim their strategies are market neutral and use it as their main marketing tool. Some quantitative strategies are also oriented towards that goal; pairs trading is a prime example, but one can also include segments of statistical arbitrage in that broad area.

This raises the question: why market neutral? To answer it, we must first discuss what market neutrality means. Consider the daily return of a stock i, denoted R_i. We can decompose the return into a market-related (systematic) portion F and a stock-specific (idiosyncratic) portion \Theta_i, yielding the following equation:

R_i = \beta_i F + \Theta_i

This is nothing more than an ordinary least squares regression model decomposing the return of stock i into a systematic component \beta_i F and an idiosyncratic (uncorrelated) component \Theta_i. Market neutrality is obtained by eliminating the systematic portion of the returns, which is equivalent to saying:

\beta_i F = 0

leaving

R_i = \Theta_i

Effectively, we get rid of the market exposure and expose ourselves only to the portion of the return driven by stock i’s specific profile, hence market neutrality. Now back to the initial question: why market neutral? Simply put, we want to make a bet on a security without at the same time betting on the direction of the market. In a relative value strategy like pairs trading, where we are betting on the outperformance of securities relative to each other regardless of where the market goes, market neutrality makes complete sense.

However, market neutrality is not only considered in relative value strategies. Imagine an investor trading a portfolio of strategies. The market exposure of this particular investor can be thought of as the capital-weighted average of the individual strategy betas:

\beta_p = \sum_j \frac{Q_j}{\sum_k Q_k} \beta_j

Where Q_j is the dollar amount invested in strategy j.

Keeping the first equation in mind, we can decompose the return of the portfolio in a similar fashion, into a systematic and an idiosyncratic (strategy-ensemble-specific) component. To obtain market neutrality, one could short (or buy) market futures or the corresponding ETF so as to satisfy the second equation, effectively neutralizing the portfolio returns’ exposure to the market.
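A toy numerical example of the weighted beta and the hedge it implies; the dollar amounts and betas are made up purely for illustration:

```python
# Capital-weighted portfolio beta and the market hedge it implies.
Q = [50_000, 50_000]   # dollars allocated to each strategy (hypothetical)
beta = [1.2, 0.4]      # each strategy's market beta (hypothetical)

total = sum(Q)
beta_p = sum(q / total * b for q, b in zip(Q, beta))  # = 0.8
hedge_notional = -beta_p * total                      # short $80,000 of market exposure
print(beta_p, hedge_notional)
```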

While this approach does not necessarily improve returns, it potentially offers better shelter against market storms by reducing exposure. Targeting a market-neutral approach also has the benefit of producing uncorrelated returns. A recent post by Marketsci explains that most investors don’t seem to look for absolute returns, but if you find yourself in the category that would prefer absolute to relative returns, taking a look at market neutrality may be worth your time. I personally like market-neutral strategies and, if interest warrants, I could dive deeper into techniques for obtaining market neutrality that I find more reliable than ordinary least squares, like quantile regression.


Why I do Things This Way

I must confess a few things. I started my journey in the investment world as a self-proclaimed value investor. I didn’t know any better and I figured: if it worked for Warren Buffett, it ought to work for me. So I read and read on the subject, and a little later I was introduced to financial theory in school; time value of money, benefits of diversification and all that jazz. At that point I felt like the planets had aligned; making money in the market was easy, we only had to consider companies as a series of future cash flows. I then learned to do fundamental valuation: discounted cash flow models, comparables analysis, financial ratio regressions, et al. However, it never really did it for me; I was always left with questions unanswered.

Looking for other, more attractive avenues, I kept hearing tales of those mythical investors who could predict the future with a single look at a chart. Eager to gain this level of perception, I started looking into visual chart analysis. At first, I must say, I was baffled by what appeared to be doodles on the chart. I remember that at some point early on, someone was trying to persuade me that if my chart was forming a teacup, I had found myself a pattern to trade on. I must admit I was perplexed. Nonetheless, I stuck with it, passed the chart-reader stage, and graduated to the indicator stage. Then things started to look more appealing to me; I particularly liked how each indicator would bring a specific aspect of a stock price series to the foreground while reducing the noise. However, I couldn’t seem to find a way to use these indicators to develop a way to make money. Decidedly, reaching the $1M mark before age 25 was going to be more complicated than I had forecast when I was younger.

Then one day I came across the blogs on my blogroll; I was hooked. The methods used in these blogs made all the puzzle pieces fit together. They wouldn’t discredit any approach per se, but would question the methods and their underlying assumptions, and would use outside-the-box thinking to answer questions left unanswered. Rather than being strictly technical or fundamental, they would use quantitative methods to analyse the market and rigorously evaluate phenomena. This no-fad, down-to-earth, scientific approach was exactly what I had been looking for all this time without knowing it. While I am nowhere near the $1M mark, I have grown from an absolutist trader to one who sees the shades of grey. Instead of looking for the Holy Grail strategy or approach, I now strive to constantly get better and to get answers to my questions.

If you are a reader of my blog, I assume you also follow these blogs, and the one great thing I hope you take away is not the new strategy that scores a 40% annualized return in a backtest, or the awesome new indicator that outperforms this and that. Above all else, I hope that what you get from our blogosphere community is the desire to investigate and to constantly improve your trading. And that, my friends, is the only way to succeed; no fundamental or technical school of thought will ever give you that if you just blindly follow it without questioning its underlying principles.