Here is the final part of the series of posts on volatility modelling, where I will briefly talk about one of the many variants of the GARCH model: the exponential GARCH (abbreviated EGARCH). I chose this variant because it improves on the GARCH model and better captures certain market mechanics.

In the GARCH post, I didn’t mention any of the limitations of the model, as I kept them for today’s post. First of all, the GARCH model assumes that only the magnitude of unanticipated excess returns determines $\sigma^2_t$. Intuitively, we can question this assumption; I, for one, would argue that not only the magnitude but also the direction of the returns affects volatility. In plain English: negative shocks (events, news, etc.) tend to impact volatility more than positive shocks. Think about the asymmetric nature of the VIX (see Bill Luby’s VIX and More).

Another limitation that may concern the quant-savvy investor is the persistence of volatility shocks. How long does a shock linger in the volatility estimate? Some may persist for a finite period of time while others might linger ad vitam aeternam, effectively changing the market volatility structure. In the GARCH(1,1) model, shocks can be persistent or not, which might be undesirable depending on the situation.
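To make the persistence point concrete, here is a minimal Python sketch (parameter values are assumed for illustration, not taken from this post) of how long a shock survives in a GARCH(1,1) variance estimate, where the decay rate of a shock is $\alpha + \beta$:

```python
import math

def garch_shock_half_life(alpha, beta):
    """Periods until a variance shock loses half its impact in GARCH(1,1).

    A shock decays geometrically at rate (alpha + beta); as alpha + beta
    approaches 1 (the integrated case), the shock never dies out, which is
    exactly the persistence issue discussed above.
    """
    persistence = alpha + beta
    if persistence >= 1.0:
        return math.inf  # shock is permanent
    return math.log(0.5) / math.log(persistence)
```

With typical estimates like $\alpha = 0.05$ and $\beta = 0.90$, a shock loses half its impact in roughly 13 to 14 periods; at $\alpha + \beta = 1$ it lingers forever.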

These two limitations are the main drivers behind the EGARCH model, which meets both objections. Without further ado, here is the EGARCH formulation:

$\log(\sigma_t^2) = \omega + \sum_{k=1}^{p} \beta_k g(Z_{t-k}) + \sum_{k=1}^{q} \alpha_k \log(\sigma_{t-k}^2)$

and

$g(Z_t) = \theta Z_t + \lambda(|Z_t| - E(|Z_t|))$

where $\omega$, $\beta_k$, $\alpha_k$, $\theta$, and $\lambda$ are coefficients, and $Z_t$ comes from a generalized error distribution.
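As a rough illustration of the recursion above, here is a minimal Python sketch simulating an EGARCH(1,1) volatility path. All parameter values are assumed for illustration, and I use standard normal innovations (for which $E(|Z_t|) = \sqrt{2/\pi}$) rather than the generalized error distribution, so this is a simplification of the model as stated:

```python
import math
import random

def g(z, theta=-0.3, lam=0.2):
    # g(Z) = theta*Z + lambda*(|Z| - E|Z|); E|Z| = sqrt(2/pi) holds for
    # standard normal innovations (an assumption -- the post's model draws
    # Z from a generalized error distribution, where E|Z| differs).
    return theta * z + lam * (abs(z) - math.sqrt(2.0 / math.pi))

def egarch_path(n, omega=-0.1, beta=0.2, alpha=0.95, seed=42):
    """Simulate sigma_t from the EGARCH(1,1) recursion:
    log(sigma_t^2) = omega + beta*g(Z_{t-1}) + alpha*log(sigma_{t-1}^2)."""
    rng = random.Random(seed)
    log_var = omega / (1.0 - alpha)  # start at the unconditional mean
    sigmas = []
    for _ in range(n):
        sigmas.append(math.exp(0.5 * log_var))
        z = rng.gauss(0.0, 1.0)
        log_var = omega + beta * g(z) + alpha * log_var
    return sigmas
```

Note that with a negative $\theta$, a negative shock raises next-period volatility more than a positive shock of the same magnitude, which is precisely the asymmetry discussed earlier; and since the recursion is on $\log(\sigma_t^2)$, the variance stays positive without any constraints on the coefficients.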

Using this model, we can expect a better volatility estimate for asset returns, since the EGARCH addresses the limitations of the classic GARCH model. In terms of implementation, as always, I recommend using statistical software to perform the analysis.

As a side note, I have to cite my sources for this post series, since I mostly followed other academic papers and took some parts verbatim, trying to focus only on what I thought was important for an introduction. Please refer to them if you want more information.

Nelson, D.B. (1991). Conditional heteroskedasticity in asset returns: a new approach. Econometrica, 59(2), 347-370.

Engle, R. (2001). GARCH 101: the use of ARCH/GARCH models in applied econometrics. Journal of Economic Perspectives, 15(4), 157-168.

QF

September 24, 2010 17:37

Good stuff QF,

Will you be applying EGARCH to a regime-switching strategy as you did using the vanilla version? Also, in that strategy I wasn’t sure if it was a long-only switching strategy or long and short. So for example, with a low volatility forecast and the 50-day SMA crossing below the 200-day, are you short the SPY in trend-following mode?

• September 24, 2010 18:05

Hi Proinsias,

Thank you for the comment. I might do it; actually, to my knowledge the EGARCH is not implemented in R, so I might have to write it myself, which could take a bit longer, but it is definitely a good suggestion. As for your question: yes, I go long and short, so in your example I would take the short side on that cross.

Cheers,
QF

November 3, 2010 21:41

I was searching for something completely different and I stumbled across a reference to EGARCH in R:
http://cran.r-project.org/doc/contrib/Farnsworth-EconometricsInR.pdf
Page 28, section 5.4.3, Miscellaneous GARCH (Ox G@RCH), has a reference to a few more exotic GARCH methods in R using the fseries package. I have not had time to verify this, but thought it was worth a mention.

• November 3, 2010 21:59

Thanks Adam, it looks very good. I’ll take a look at it.

Cheers,

QF

October 6, 2010 06:51

Hello QF,

just wondering what your comments are on other, simpler volatility forecasting measures vs. GARCH models, like the SAMURAI GARCHkiller published in Wilmott magazine: http://www.finanzaonline.com/forum/attachment.php?attachmentid=1280759&d=1279799855

Paolo

• October 6, 2010 10:07

Hello Paolo,

Thank you for the article, it was very interesting. I think the method is very appealing to the practitioner for its simplicity; it is definitely easier to compute an EMA than a GARCH model. But I also think that statistical software like R makes the GARCH so easy to fit that I don’t see why I would want to sacrifice the accuracy. All in all, an interesting take on it, and a good suggestion. Thank you.

Cheers
QF

October 6, 2010 18:36

QF, I would agree if GARCH models showed more accuracy… the point is that this SAMURAI approach, as well as other similar ones using EWMA – http://www.investopedia.com/articles/07/EWMA.asp – seem to provide higher accuracy as well as simplicity.

Do you think that’s correct?

Paolo

• October 7, 2010 09:39

Paolo, thank you again for the good comment. I like the simplicity of the EWMA. Conceptually, the two are very similar in that both take historical volatility into account while giving recent observations a much higher weight. However, the EWMA still takes an average, while the GARCH model uses an auto-regressive model to find the statistically best weights to attribute to previous observations. The weighting scheme in the EWMA is fixed, while the GARCH’s will vary depending on the values of previous observations as predictors. Furthermore, observations beyond the lookback length in the EWMA are given a weight of zero. In the GARCH model, the weights get progressively smaller but never reach zero. Depending on which assumptions you prefer, one method will be a better fit for you.
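To illustrate the fixed-weight point, here is a small Python sketch showing the EWMA’s truncated, geometrically decaying weights alongside its usual recursive form. The decay $\lambda = 0.94$ and the lookback are assumed values for illustration (in the spirit of RiskMetrics), not figures from this discussion:

```python
def ewma_weights(lam=0.94, lookback=10):
    # Fixed weights (1-lam)*lam^k for lag k, truncated at the lookback and
    # renormalized to sum to one: every observation past the lookback gets
    # weight zero, regardless of the data.
    raw = [(1.0 - lam) * lam ** k for k in range(lookback)]
    total = sum(raw)
    return [w / total for w in raw]

def ewma_variance(returns, lam=0.94):
    # Equivalent recursive form: sigma2_t = lam*sigma2_{t-1} + (1-lam)*r^2.
    sigma2 = returns[0] ** 2
    for r in returns[1:]:
        sigma2 = lam * sigma2 + (1.0 - lam) * r ** 2
    return sigma2
```

The key contrast: these weights are set in advance by a single decay parameter, whereas a fitted GARCH effectively re-weights past squared returns through its estimated coefficients.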

Basically, I think that the GARCH is a more robust method than the EWMA. The EWMA’s accuracy might be better at times, but in aggregate I think the GARCH will get better results. It would be interesting to test, and maybe I will post tests on different securities and assets to get a feel for it.

Cheers,
QF