Predictor(s)/Factor(s) Selection

Before getting into the post's main subject, I want to mention a couple of things. First, the TAA system discussed earlier on the blog is still in the works; a busy schedule and work commitments made it difficult for me to polish and publish it as soon as I wanted. Second, I apologize for the New Year’s hiatus; I will be back to regular posting in the coming days.

Back to the subject at hand: predictor selection. Regular readers will not find machine learning an unusual topic for the blog. I really enjoy using statistical learning models in my research and trading, and have talked about them a fair bit in earlier posts. However, reviewing the blog history recently, I noticed that I had overlooked this very important topic.

Just as we must be careful which predictors we use with linear regression or, for that matter, any forecasting model, we need to pay close attention to our inputs when using learning models (GIGO: garbage in, garbage out).

A “kitchen sink” approach, throwing in as many predictors as one can think of, is generally not the way to go. A large number of predictors often brings a high level of noise, which makes it very hard for the models to identify reliable patterns in the data. On the other hand, with very few predictors, it is unlikely that there will be many high-probability patterns to profit from. In theory, we are trying to minimize the number of predictors for a given accuracy level; the toy sketch below illustrates the trade-off.
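
As an illustration (my own sketch, not from the original post; the data is synthetic and the setup is made up), the snippet below scores a simple classifier with cross-validation while letting it use progressively more predictors. Once the handful of genuinely informative predictors are in, the extras tend to add noise rather than accuracy:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic data: 20 predictors, only 4 of which carry real signal.
X, y = make_classification(n_samples=500, n_features=20,
                           n_informative=4, n_redundant=0,
                           random_state=42)

# Cross-validated accuracy as the model is allowed more predictors.
for k in (2, 4, 8, 16, 20):
    X_k = SelectKBest(f_classif, k=k).fit_transform(X, y)
    score = cross_val_score(LogisticRegression(max_iter=1000),
                            X_k, y, cv=5).mean()
    print(f"{k:2d} predictors: CV accuracy = {score:.3f}")
```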

In practice, we usually start from what theory suggests, or from discretionary observation, to determine what should be included. The quantification of this process is often called Exploratory Factor Analysis (EFA): essentially, looking at the covariance matrix of our predictors and performing an eigenvalue decomposition. In doing this we aim to determine the number of factors influencing our response (dependent) variable, and the strength of that influence. The technique is useful for better understanding the relationships between a predictor and the response variable, and for determining which predictors are most important when classifying. This type of covariance matrix decomposition should help us refine our predictor selection and hopefully improve classification performance; this process will be the subject of the next few posts.
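
To make the decomposition step concrete, here is a minimal sketch (again my own, with a made-up predictor matrix): we decompose the predictors' correlation matrix and look at how much variance each candidate factor explains, using the common rule of thumb of keeping factors with an eigenvalue above 1 (the Kaiser criterion):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in predictor matrix: 500 observations of 6 predictors,
# built so that the first three share a common driving factor.
factor = rng.normal(size=(500, 1))
X = np.hstack([factor + 0.5 * rng.normal(size=(500, 3)),
               rng.normal(size=(500, 3))])

# Eigenvalue decomposition of the predictors' correlation matrix.
corr = np.corrcoef(X, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]  # sorted largest first

print("eigenvalues:       ", np.round(eigvals, 2))
print("variance explained:", np.round(eigvals / eigvals.sum(), 2))
print("factors kept (Kaiser, eigenvalue > 1):", int(np.sum(eigvals > 1)))
```

In this toy setup the first eigenvalue stands out because the decomposition recovers the single factor driving the first three predictors; on real data, the hope is that a handful of such factors summarize a much larger predictor basket.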

QF

4 thoughts on “Predictor(s)/Factor(s) Selection”

  1. Nice post on something I’ve thought about for a while. In my experience, you may improve the quality of predictive variables more by considering the role they play at the market microstructure level (e.g. bid/offer execution volumes w.r.t. time or price levels, etc.) than by pursuing entirely quantitative means.

    1. Hey Rocko,

      This is a tentative idea; I have thought about it for some time and I think it could help. I also think you are right in your comment: what I am looking to do is determine a very broad basket of potential predictors based on my intuition, and then perform the analysis to determine the best ones to use for a given period. I am not quite sure what will happen yet, though!

      Best,

      QF

  2. Hi QF,

    I assume what you are talking about is using PCA to find the main components and then training on those?
    From my (basic) understanding of PCA, when you project into the eigenspace you remove any variation that is of higher than second order, which would be fine under an assumption of normality. So you might be removing the very information that you want to capture. If I have time I will try to find the lecture where I saw this.

    Anyway, I look forward to seeing what you come up with.
