Be Careful What You Wish For

It is in human nature to seek the path of least resistance. While this might be good in some instances, when dealing with my capital I try to keep things simple while always steering clear of intellectual laziness.

Many top tier bloggers have mentioned the traps of assumptions and the limitations of parametric statistics when dealing with market data. A good thing to do before using a certain method or model is always to do a little research on its underlying assumptions. If they don’t fit our data, then we know we have to be more careful. That said, such methods can still be quite useful; do not automatically disregard a method or model just because its assumptions aren’t met.

For example, consider the great workhorse of econometrics: the least squares model. It is widely used in academia; it is actually quite hard to find a finance paper without a mention of regression in some way or form. That’s just what we do: we like to try and model phenomena in simple and elegant ways. It is often used in its simplest form, the ordinary least squares (OLS) model that many of you may know as linear regression. I am sure that most readers of this blog have used it before in some fashion. I also think that some may have used it without really paying attention to its assumptions:

1. Population regression function is linear in parameters
2. The independent variable and the errors are independent: cov(x_i, \varepsilon_i) = 0
3. Homoscedasticity (i.e. constant variance) of the errors
4. No autocorrelation: cov(\varepsilon_t, \varepsilon_{t-1}) = 0
5. The regression model is correctly specified, all relevant variables are included
6. The error is normally distributed

Now with this in mind, we see that OLS has some assumptions we need to address before we blindly apply it. The big two for financial time series are numbers 3 and 4. See the post series on GARCH modeling here for a more specific discussion of the matter.
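To make this concrete, here is a minimal sketch of how one might check assumptions 3 and 4 on a fitted regression, using only numpy (statsmodels ships proper implementations of both tests; the helper names below are mine):

```python
import numpy as np

def ols_fit(x, y):
    """Ordinary least squares via lstsq; returns coefficients and residuals."""
    X = np.column_stack([np.ones(len(y)), x])          # add intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return beta, resid

def durbin_watson(resid):
    """Durbin-Watson statistic: values near 2 suggest no first-order
    autocorrelation in the residuals (assumption 4)."""
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

def breusch_pagan(resid, x):
    """Breusch-Pagan LM statistic: n * R^2 from regressing squared residuals
    on the regressors. Large values (vs. a chi-squared distribution) suggest
    heteroscedasticity (assumption 3 violated)."""
    Xc = np.column_stack([np.ones(len(resid)), x])
    e2 = resid ** 2
    g, *_ = np.linalg.lstsq(Xc, e2, rcond=None)
    fitted = Xc @ g
    r2 = 1 - np.sum((e2 - fitted) ** 2) / np.sum((e2 - e2.mean()) ** 2)
    return len(resid) * r2
```

On well-behaved simulated data the Durbin-Watson statistic should come out near 2 and the Breusch-Pagan statistic should be small; on real financial returns, expect both to flag problems.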

The point here is not to invalidate the least squares method at all; I use it frequently. The point is that assumptions can be quite restrictive and need to be considered regardless of what method or model you want to use, and that the path of least resistance in trading is not always the best one. A good habit when stumbling upon a promising new tool for your trader’s toolbox is to dig a bit deeper and understand the underlying process and the assumptions you make every time you use it. It is also a nice plug for non-parametric and non-linear statistical methods, which usually tend to have looser base assumptions.



12 responses to “Be Careful What You Wish For”

  1. It’s always good to keep in mind that least squares estimators are unbiased, consistent and asymptotically normally distributed, even without assumptions 6, 3 and 4. I think the most important assumption of the linear regression model for use in finance is that of constant regression coefficients; I personally like to favor models with regime-switching or time-varying coefficients. These, however, are not so easily estimated. I think a good rule of thumb to test a trading idea, for instance, is that if it doesn’t work with simple OLS or OLS with a rolling window, then it is unlikely that it will work with another, more involved, regression model.
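The rolling-window idea from the comment above can be sketched in a few lines of numpy (the function name and window length are illustrative); watching how the estimated slope drifts across windows is a cheap first look at whether the constant-coefficient assumption is plausible:

```python
import numpy as np

def rolling_ols(x, y, window):
    """OLS of y on x re-estimated on each rolling window.
    Returns an array with one (intercept, slope) row per window."""
    out = []
    for end in range(window, len(y) + 1):
        xs, ys = x[end - window:end], y[end - window:end]
        X = np.column_stack([np.ones(window), xs])     # intercept + regressor
        beta, *_ = np.linalg.lstsq(X, ys, rcond=None)
        out.append(beta)
    return np.array(out)
```

If the rolling slopes wander far from each other, a regime-switching or time-varying-coefficient model may be worth the extra effort.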

  2. Adding to the above comment, serial correlation is easily dealt with by adding autoregressive terms to your regression. Heteroscedasticity on the other hand is extremely difficult to deal with.

    Another problem that should be mentioned is the bias introduced by outlier data. In practice I don’t use standard OLS as it stands, but rather use a robustification procedure on the covariance X’X before applying inv(X’X) X’Y. Basically one wants to determine a representative data set without the outliers.

    Finally, you mention linear in parameters; to clarify, I would say it solves a linear system with respect to the design matrix. That means one can solve / fit polynomials and other expressions of the form a f(.) + b g(.) + c h(.), where for polynomials the functions, or columns in the design matrix, are the squares, cubes, etc. of the data.
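The first point above, adding autoregressive terms to soak up serial correlation, can be sketched as a toy dynamic regression (assuming numpy; this is not a full Cochrane-Orcutt procedure, just lagged regressors thrown into the design matrix):

```python
import numpy as np

def ols_with_ar_terms(x, y):
    """Regress y_t on x_t, x_{t-1}, and y_{t-1}. The lagged terms absorb
    first-order serial correlation that plain OLS would leave in the errors.
    Returns coefficients [const, x_t, x_{t-1}, y_{t-1}] and residuals."""
    X = np.column_stack([np.ones(len(y) - 1), x[1:], x[:-1], y[:-1]])
    beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    resid = y[1:] - X @ beta
    return beta, resid
```

After adding the lags, a Durbin-Watson statistic on the new residuals should move back toward 2 if first-order autocorrelation was the problem.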
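The robustification idea is commonly implemented as iteratively reweighted least squares; the sketch below uses Huber weights as one possible choice, not necessarily the commenter's exact covariance-cleaning procedure:

```python
import numpy as np

def huber_ols(x, y, k=1.345, iters=20):
    """Iteratively reweighted least squares with Huber weights: observations
    with large standardized residuals get down-weighted, reducing outlier bias.
    Returns the robust (intercept, slope) estimates."""
    X = np.column_stack([np.ones(len(y)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)       # start from plain OLS
    for _ in range(iters):
        resid = y - X @ beta
        # robust scale estimate from the median absolute deviation (MAD)
        s = max(np.median(np.abs(resid - np.median(resid))) / 0.6745, 1e-8)
        u = resid / s
        w = np.where(np.abs(u) <= k, 1.0, k / np.abs(u))   # Huber weights
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta
```

On data salted with a few large outliers, the robust slope should stay close to the true value while plain OLS gets dragged away.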
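And the "linear in parameters" point: the fit below is nonlinear in the data but linear in the coefficients, so ordinary least squares still applies directly (the basis functions and coefficients here are arbitrary examples):

```python
import numpy as np

# Simulated data whose true relationship is nonlinear in x
rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 300)
y = 1.5 * x**2 - 0.5 * np.sin(x) + 0.3 + rng.normal(scale=0.1, size=300)

# Design matrix with columns [1, f(x), g(x)] = [1, x^2, sin(x)]:
# the model a*1 + b*x^2 + c*sin(x) is linear in (a, b, c)
D = np.column_stack([np.ones_like(x), x**2, np.sin(x)])
coef, *_ = np.linalg.lstsq(D, y, rcond=None)
```

Polynomial regression is just the special case where the columns are powers of x.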

    • Hey stat arb,

      I actually didn’t but some people have strongly recommended it to me. I will have to wrap my head around it soon. Thank you for the suggestion.



  3. How does one test for constant variance? Does one calculate variance by quartile or decile or something?

    • Good question,

      Visually, you can look at plots of residuals versus time and residuals versus predicted value, and be alert for evidence of residuals that are getting larger (i.e., more spread-out) either as a function of time or as a function of the predicted value. (To be really thorough, you might also want to plot residuals versus some of the independent variables.)

      Statistically, discontinuities can be detected using structural change methods (Quandt, Chow, etc.). Quantile regression may also be effective for evaluating non-time continuity / stability of linear models.
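A bare-bones version of the Chow test mentioned above, assuming numpy (in real applications the statistic should be compared against the appropriate F distribution rather than eyeballed):

```python
import numpy as np

def chow_test(x, y, split):
    """Chow F-statistic for a structural break at index `split` in y ~ x.
    Compares the pooled residual sum of squares against the RSS from
    fitting each subsample separately; large values suggest a break."""
    def rss(xs, ys):
        X = np.column_stack([np.ones(len(ys)), xs])
        beta, *_ = np.linalg.lstsq(X, ys, rcond=None)
        r = ys - X @ beta
        return np.sum(r ** 2)
    k = 2                                    # parameters per regression
    n = len(y)
    rss_pooled = rss(x, y)
    rss_1 = rss(x[:split], y[:split])
    rss_2 = rss(x[split:], y[split:])
    num = (rss_pooled - rss_1 - rss_2) / k
    den = (rss_1 + rss_2) / (n - 2 * k)
    return num / den
```

A series whose slope changes halfway through produces a far larger statistic than one with stable coefficients.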



  4. QF, sure thing. Just read the first chapter for the basic idea. You can do that in Amazon Preview.

    There’s also an EconTalk with Ed Leamer that relates to it from the sidelines. If you do the podcast thing.

    Basically it’s just saying that you can use regressions for common-sense things, and it tells you where the standard problems you outline above don’t matter.

    Another something to check out is Data Analysis for Public Policy by E Tufte. There’s a chapter in there where he gives essentially the following example:

    You have data from Quadrants II, III, and IV. You therefore have seen the interaction between X_1 and X_2, correct?

    No! If you try to project into Quadrant I, you haven’t seen that kind of story before.

    But the regression can tell you the slope of points in Quad III for example.