S&P 500 Sectors Analysis

To follow up on the last post, and to tie in nicely with Engineering Returns’ recent sector rotation system, I will show a factor decomposition of the S&P 500 across its sectors. In essence, you can think of it as a multifactor analysis where I try to determine which sectors are relevant for a given period in predicting the S&P 500. This kind of analysis matters because a lot of the point-and-click trading at proprietary firms is done based on price action and on correlations across markets. It is the latter part I address today.

Both analyses cover the period from Jan. 1, 2008 to today. For the linear least squares approach, we regress next-day SPY returns on the current-day returns of our sector ETFs (I used the SPDRs) as independent variables. Results are below:
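For concreteness, here is a minimal Python sketch of that regression. It is not the original code; the ticker list and the yfinance data source are assumptions, and any daily price feed would do.

```python
# Sketch: regress next-day SPY returns on current-day sector ETF returns (OLS).
import pandas as pd
import statsmodels.api as sm
import yfinance as yf  # assumed data source

tickers = ["SPY", "XLY", "XLP", "XLE", "XLF", "XLV", "XLI", "XLB", "XLK", "XLU"]
prices = yf.download(tickers, start="2008-01-01")["Close"]
returns = prices.pct_change().dropna()

# Predictors: today's sector returns. Response: tomorrow's SPY return.
X = sm.add_constant(returns.drop(columns="SPY"))
y = returns["SPY"].shift(-1)      # next-day SPY return
X, y = X.iloc[:-1], y.iloc[:-1]   # drop the last row, which has no next day

model = sm.OLS(y, X).fit()
print(model.summary())            # coefficients and p-values per sector
```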

What we see from the table is that the consumer discretionary, financials, materials and technology sectors are statistically significant. Additionally, the financials and technology sectors seem to be precursors of reversals, as indicated by their negative coefficients. The opposite holds for the consumer discretionary and materials sectors, which do not appear to be contrarian. This simple analysis suggests that these four sectors are the best candidates to consider when looking into cross-market relationships.

Another aspect of trading where factor analysis can come into play relates to diversification benefits. Using principal component analysis (PCA), we extract the co-movements of our sector ETFs and SPY. Similarly, we could use the same method to extract co-movements across country indices, or some other discretionary breakdown. Looking at the factor loadings of each ETF on the first three principal components (which explain about 95% of the covariance matrix), we get a better feel for the value of allocating capital to one ETF versus another, and it puts the illusion of diversification in perspective. In these days of high correlation across asset classes, traders running TAA systems have every reason to use this kind of analysis in their asset allocation and weighting. The factor loadings are below:
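Again as a hedged sketch rather than the original code, the PCA boils down to an eigendecomposition of the return covariance matrix; the tickers and data source below are the same assumptions as in the previous snippet.

```python
# Sketch: PCA of sector ETF and SPY returns via the covariance matrix.
import numpy as np
import pandas as pd
import yfinance as yf  # assumed data source

tickers = ["SPY", "XLY", "XLP", "XLE", "XLF", "XLV", "XLI", "XLB", "XLK", "XLU"]
returns = yf.download(tickers, start="2008-01-01")["Close"].pct_change().dropna()

eigenvalues, eigenvectors = np.linalg.eigh(returns.cov())
order = np.argsort(eigenvalues)[::-1]           # largest component first
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

explained = eigenvalues / eigenvalues.sum()
print("Variance explained by the first three PCs:", explained[:3].sum())

loadings = pd.DataFrame(eigenvectors[:, :3], index=returns.columns,
                        columns=["PC1", "PC2", "PC3"])
print(loadings.round(2))                        # factor loadings per ETF
```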

Meric et al. (2006) say it best: “The sectors with high factor loadings in the same principal component move closely together making them poor candidates for diversification benefits. The higher the factor loading of a sector in a principal component, the higher its correlation is with the other sectors with high factor loadings in the same principal component. The sector indexes with high factor loadings in different principal components do not move closely together and they can provide greater diversification benefit. Some sector indexes may have high factor loadings in more than one principal component. It indicates that the movements of these sectors have some similarity to the movements of sectors with high factor loadings in more than one principal component. These sectors are not good prospects for successful sector portfolio diversification.” In depth interpretation of the table is left to the curious reader.

In conclusion, factor decomposition/analysis can be very useful for traders to get a feel for the instruments they trade. Whether for establishing significant lead/lag relationships or for identifying diversification caveats, both least squares and PCA can be valuable tools. Interested readers can also look at Granger causality for similar avenues.
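For those curious about that last pointer, a bare-bones Granger causality check might look like the sketch below; the choice of XLF as the candidate predictor is purely illustrative.

```python
# Sketch: does XLF Granger-cause SPY returns? (statsmodels implementation)
import yfinance as yf  # assumed data source
from statsmodels.tsa.stattools import grangercausalitytests

prices = yf.download(["SPY", "XLF"], start="2008-01-01")["Close"]
returns = prices.pct_change().dropna()

# First column is the series being predicted, second the candidate cause.
grangercausalitytests(returns[["SPY", "XLF"]], maxlag=2)
```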

QF

Predictor(s)/Factor(s) Selection

Before getting into the post’s main subject, I want to mention a couple of things. First, the TAA system discussed earlier on the blog is still in the works; a busy schedule and work commitments have made it difficult for me to polish and publish it as soon as I wanted. Second, I apologize for the New Year’s hiatus; I will be back to regular posting in the coming days.

Back to the subject at hand: predictor selection. For regular readers, machine learning will not come across as an unusual topic for this blog. I really enjoy using statistical learning models in my research and trading, and I have talked about them a fair bit in earlier posts. However, reviewing the blog’s history recently, I noticed that I had overlooked this very important topic.

Just as we must be careful about which predictors we use in linear regression, or for that matter in any forecasting model, we need to pay close attention to our inputs when using learning models (GIGO: garbage in, garbage out).

A “kitchen sink” approach, throwing in as many predictors as one can think of, is generally not the way to go. A large number of predictors often brings a high level of noise, which makes it very hard for the models to identify reliable patterns in the data. On the other hand, with very few predictors, it is unlikely that there will be many high-probability patterns to profit from. In principle, we are trying to minimize the number of predictors for a given level of accuracy.

In practice, we usually start from what theory suggests, or from discretionary observation, to determine what should be included. The quantification of this process is often called exploratory factor analysis (EFA): essentially, we look at the covariance matrix of our predictors and perform an eigenvalue decomposition. In doing so, we aim to determine the number of factors influencing our response (dependent) variable and the strength of that influence. This technique is useful for better understanding the relationship between a predictor and the response variable, and for determining which predictors matter most when classifying. This type of covariance matrix decomposition should help us refine our predictor selection and hopefully improve classification performance; this process will be the subject of the next few posts.
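As a rough illustration of that first step, the sketch below eigendecomposes a predictor correlation matrix and counts the dominant factors; the placeholder data and the Kaiser eigenvalue-greater-than-one rule are my own assumptions, not necessarily what will be used in the upcoming posts.

```python
# Sketch: EFA-style first pass, eigendecompose the predictors' correlation
# matrix and count the dominant factors. Data here is random placeholder.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
predictors = pd.DataFrame(rng.standard_normal((500, 8)),
                          columns=[f"x{i}" for i in range(8)])

eigenvalues = np.linalg.eigvalsh(predictors.corr())[::-1]  # largest first

# Kaiser criterion (one common heuristic): keep factors with eigenvalue > 1.
n_factors = int((eigenvalues > 1).sum())
print("Eigenvalues:", np.round(eigenvalues, 2))
print("Suggested number of factors:", n_factors)
```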

QF