
Give me good data, or give me death

January 10, 2016
A good discussion not too long ago led me to start a revolution against some data management aspects of my technology stack. It is one of the areas where the decisions made will impact every project undertaken down the road. Time is one of our most valuable resources, and we need to minimize the amount of it spent dealing with data issues. Messy and/or hard-to-use data is the greatest drag I have encountered when trying to produce research.

I had to keep a couple of things in mind when deciding on a solution. First, I knew I did not want to depend on any database software. I also knew that I would not be the only one using the data and that, although I use Python, other potential users still don't know better and use R. The ideal solution would be as close to language-agnostic as possible. Furthermore, I wanted a solution stable enough that I would not have to worry too much about backward compatibility in case of future upgrades.

With those guidelines in mind, I could start to outline what the process would look like:

  1. Fetch data from vendor (csv form)
  2. Clean the data
  3. Write the data on disk
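As a rough sketch, the three steps above might look like the following in pandas. The vendor payload, file name, and cleaning rules here are made up for illustration; a real vendor file would need its own parsing quirks handled.

```python
import io
import pandas as pd

# Hypothetical vendor payload standing in for a downloaded csv file
# (note the out-of-order and duplicated rows, typical vendor warts).
raw = io.StringIO(
    "timestamp,Open,High,Low,Close\n"
    "2015-01-01 00:00:00.001,1.0,1.2,0.9,1.1\n"
    "2015-01-01 00:00:00.000,1.0,1.1,0.8,1.0\n"
    "2015-01-01 00:00:00.001,1.0,1.2,0.9,1.1\n"
)

# 1. Fetch: parse the vendor csv into a DataFrame.
df = pd.read_csv(raw, index_col="timestamp", parse_dates=True)

# 2. Clean: drop duplicate timestamps and sort chronologically.
df = df[~df.index.duplicated(keep="first")].sort_index()

# 3. Write: persist to disk (csv here; any candidate format slots in).
df.to_csv("clean_bars.csv")
```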

The biggest decision I had to make at this stage was the format used to store the data. Based on the requirements listed above, I shortlisted a few formats that I thought would fit my purpose: csv, json, hdf5, and msgpack.

At this stage I wanted to get a feel for the performance of each option. To do that, I created a simple dataset of 1M millisecond bars, i.e. 4M individual observations.

In [1]:
import pandas as pd
import numpy as np

# create a sizable dataset: 1M millisecond bars (4M observations)
n_obs = 1000000
idx = pd.date_range('2015-01-01', periods=n_obs, freq='L')
df = pd.DataFrame(np.random.randn(n_obs, 4), index=idx,
                  columns=["Open", "High", "Low", "Close"])
df.head()
Out[1]:
                             Open      High       Low     Close
2015-01-01 00:00:00.000 -0.317677 -0.535562 -0.506776  1.545908
2015-01-01 00:00:00.001  1.370362  1.549984 -0.720097 -0.653726
2015-01-01 00:00:00.002  0.109728  0.242318  1.375126 -0.509934
2015-01-01 00:00:00.003  0.661626  0.861293 -0.322655 -0.207168
2015-01-01 00:00:00.004 -0.587584 -0.980942  0.132920  0.963745
Let’s now see how they perform for writing.
In [2]:
%timeit df.to_csv("csv_format")
1 loops, best of 3: 8.34 s per loop
In [3]:
%timeit df.to_json("json_format")
1 loops, best of 3: 606 ms per loop
In [4]:
%timeit df.to_hdf("hdf_format", "df", mode="w")
1 loops, best of 3: 102 ms per loop
In [5]:
%timeit df.to_msgpack("msgpack_format")
10 loops, best of 3: 143 ms per loop
And finally let’s have a look at their read performance.
In [6]:
%timeit pd.read_csv("csv_format")
1 loops, best of 3: 971 ms per loop
In [7]:
%timeit pd.read_json("json_format")
1 loops, best of 3: 6.05 s per loop
In [8]:
%timeit pd.read_hdf("hdf_format", "df")
100 loops, best of 3: 11.3 ms per loop
In [9]:
%timeit pd.read_msgpack("msgpack_format")
10 loops, best of 3: 33.1 ms per loop
Based on that quick and dirty analysis, HDF seems to do better. Read performance matters much more to me, as the data should only be written once but will certainly be read many times. Please note that I did not intend to portray this test as end-all proof; I simply wanted to survey the options and evaluate their relative performance.

Based on my preliminary results, including but not limited to this analysis, I elected to store the data in the HDF format, as it meets all my requirements and looks to be fairly fast, at least for medium-sized data. It should also enable the R homies to use it through the excellent rhdf5 library.

So at this point I have decided on a format. The question that remains to be answered is how to organize it. I was thinking of something like this:

|-- Equities
    |-- Stock (e.g. SPY, AAPL etc.)
        |-- Metadata
|-- Forex
    |-- Cross (e.g. USDCAD, USDJPY etc.)
        |-- Metadata
        |-- Aggregated data
        |-- Exchanges (e.g. IdealPRO etc.)
|-- Futures
    |-- Exchange (e.g. CME, ICE, etc.)
        |-- Contract (e.g. ES, CL etc.)
            |-- Metadata
            |-- Continuously rolled contract
            |-- Expiry (e.g. F6, G6, H6 etc.)
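As a sketch of how such a tree could map onto HDF5, pandas' HDFStore accepts slash-separated keys, so each leaf in the outline becomes a path inside one file. The file name, tickers, and toy data below are assumptions for illustration only (this also requires PyTables installed):

```python
import numpy as np
import pandas as pd

# Toy frame standing in for real bars.
bars = pd.DataFrame(
    np.random.randn(5, 4), columns=["Open", "High", "Low", "Close"],
    index=pd.date_range("2015-01-01", periods=5, freq="D"),
)

# HDF5 groups map naturally onto the proposed tree: one key per leaf.
with pd.HDFStore("market_data.h5", mode="w") as store:
    store.put("equities/SPY/data", bars)
    store.put("futures/CME/ES/continuous", bars)
    store.put("futures/CME/ES/H6", bars)

# Reading a leaf back only needs its path.
with pd.HDFStore("market_data.h5", mode="r") as store:
    es = store["futures/CME/ES/continuous"]
    keys = store.keys()
```

The nice side effect is that the access API reduces to building a key string per asset class, which softens the polymorphism problem somewhat.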

Personally, I am not too sure how best to do this. It seems to me that it would be rather difficult to design a clean polymorphic API to access the data with such a structure, but I can't seem to find a better way.

I would like to hear what readers have come up with to address those problems. In addition to how you store and organize your data, I am very keen to hear how you would handle automating the creation of perpetual contracts without having to manually write a rule for the roll of each product. This has proven to be a tricky task for me and since I use those contracts a lot in my analysis I am looking for a good solution.
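For what it's worth, the simplest roll I know of is a fixed-date roll with difference back-adjustment; the genuinely hard part, picking the roll date automatically per product (e.g. a volume or open-interest crossover), is exactly what the question above is about and is left out of this toy sketch. The expiries and prices below are made up:

```python
import pandas as pd

# Two hypothetical expiries of the same contract (prices are invented).
idx = pd.date_range("2016-01-01", periods=6, freq="D")
front = pd.Series([100.0, 101.0, 102.0, 103.0, 104.0, 105.0], index=idx, name="F6")
back = pd.Series([102.0, 103.0, 104.0, 105.0, 106.0, 107.0], index=idx, name="G6")

roll_date = pd.Timestamp("2016-01-04")

# Back-adjust: shift pre-roll front-month prices by the spread observed
# at the roll, then splice the two series together with no price gap.
spread = back.loc[roll_date] - front.loc[roll_date]
continuous = pd.concat([
    front.loc[:roll_date] + spread,   # adjusted front-month segment
    back.loc[roll_date:].iloc[1:],    # back month takes over after the roll
])
```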

Hopefully this discussion will be of interest to readers that manage their own data in-house.

99 Problems But A Backtest Ain’t One

March 23, 2015

Backtesting is a very important step in strategy development. But if you have ever gone through the full strategy development cycle, you may have realized how difficult it is to backtest a strategy properly.

People use different tools to implement a backtest depending on their expertise and goals. For those with a programming background, Quantstrat (R), Zipline, PyAlgoTrade (Python) or TradingLogic (Julia) are sure to be favorite options. For those preferring a retail product that involves less conventional programming, Tradestation or TradingBlox are common options.

One of the problems with using a third-party solution is often the lack of flexibility. This doesn't become apparent until one tries to backtest a strategy that requires more esoteric details. Obviously this will not be an issue when backtesting the classics like moving averages or Donchian channel type strategies, but I am sure some of you have hit your head on the backtest complexity ceiling more than once. There is also the issue of fill assumptions. Most backtests I see posted on the blogosphere (including the ones present on this humble website) assume trades at the close price as a simplifying assumption. While this works well for the purpose of entertaining a conversation on the internet, it is not robust enough to serve as the basis for deploying significant capital.

The first time one can actually gauge how good (or bad) his chosen backtesting solution is comes when the strategy is traded live. However, I am always amazed at how little attention some traders pay to how closely their backtests match their live results. To some, it is as if live trading were simply the step that follows the backtest. I think this misses a crucially important part of the trading process, namely the feedback loop. There is a lot to be learned in figuring out where the differences between simulation and live implementation come from. In addition to the obvious bugs that may have slipped through testing, it will quickly become apparent whether your backtest assumptions are any good and whether or not they must be revisited.

Ideally, backtested results and live results for the period in which they overlap should be closely similar. If they are not, one should ask serious questions and try to figure out where the discrepancies come from. In a properly designed simulation on slow-frequency data (think daily or longer) you should be able to reconcile both to the penny. If the backtester is well designed, the difference will probably center on the fill price at the closing auction differing from the last traded price, which is typically what gets reported as the close. I always pay particular attention to the data I use to generate live signals and compare it to the data fed to the simulation engine, as I often find that the live implementation trades off data that doesn't quite match the simulation dataset.

Obviously, as the time frame shrinks the problems are magnified. Backtesting intraday trading strategies is notoriously difficult and beyond the scope of this blog. Let's just say that a good intraday backtester is a great competitive advantage for traders/firms willing to put in the development time and money.
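A minimal sketch of that reconciliation might line up the two fill logs by order id and flag outsized slippage. The column names, values, and tolerance here are assumptions for illustration, not a real broker or engine API:

```python
import pandas as pd

# Hypothetical fill logs from the simulator and the live engine.
sim = pd.DataFrame({
    "order_id": [1, 2, 3],
    "fill_price": [100.00, 101.50, 99.75],
})
live = pd.DataFrame({
    "order_id": [1, 2, 3],
    "fill_price": [100.02, 101.50, 99.70],
})

# Line the two logs up by order and measure the slippage per fill.
recon = sim.merge(live, on="order_id", suffixes=("_sim", "_live"))
recon["slippage"] = recon["fill_price_live"] - recon["fill_price_sim"]

# Flag fills whose difference exceeds a tolerance worth investigating.
suspect = recon[recon["slippage"].abs() > 0.01]
```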

It would be negligent of me to complain about backtesting problems without offering some of the processes I use to improve their quality and, ultimately, their usability. First, I personally chose not to use a third-party backtesting solution. I use software that I write myself, not because it is better than other solutions out there but because it allows me to fully customize every aspect of the simulation in a way that is intuitive to me. That way, I can tune any backtest as part of the feedback loop I referred to earlier to more accurately model live trading. Furthermore, as I refined the backtester over time, it slowly morphed into an execution engine that, with the proper adapters, could be used to trade live markets. Effectively, I have a customized backtest for each strategy, but they all share a common core of code that forms the base of the live trading engine. I also spend quite some time comparing live fills to simulated fills and trying to reconcile the two.

Please do not think that I am trying to tell you that a do-it-yourself solution is the best; I am simply saying that it is the one that fits me best. The point I am trying to make herein is that no matter what solution you decide to use, it is valuable to examine the differences between simulated and live results. Who knows, perhaps it will make you appreciate the process even more.

I would be tremendously interested to hear what readers think on the subject; please share some insight in the comment section below so everybody can benefit.


Hello Old Friend

March 17, 2015

Reports of my death have been greatly exaggerated ~Mark Twain

Wow, it has been a while. Roughly four years have gone by since my last post. It might seem like a long time to some, but coming out of college and hitting the ground running as a full-time trader made it feel like the blink of an eye to me. That being said, I have recently come to miss writing and intend to start blogging again, albeit on an irregular schedule that will evolve with my free time.

What to expect

Obviously, since I have been trading full time my skill set has evolved, so I can only imagine that the new perspective I bring to the analysis moving forward will be more insightful.

You will notice a few changes, the biggest one being that I no longer use R as my main language. I have all but fully moved my research stack to Python, so you can expect to see a lot more of it moving forward. As for the content, I think the focus will remain the same for the most part; algorithmic trading for the Equities markets.

Special Thank You

Finally I want to take the time to thank the readers that kept emailing and kept in touch during my absence from the blogosphere. I can only hope that the number of people that somehow find value in these short articles will grow over time and that I will meet other interesting people. You are after all the reason I write these notes. So a big Thank You to all of you.


2012 Wishes

December 28, 2011

As we look at 2011 through the rear-view mirror, I take a quick break from my recent lethargy to send my best wishes for 2012 to all readers. May 2012 be a trading year filled with opportunities you capitalize on.

See you in the new year.


Ensemble Building 101

July 1, 2011

In continuation of the last post, I will be looking at building a robust ensemble for the S&P 500 ETF and mini-sized future. The goal here is to set up a nice framework that will (hopefully) be of good use to readers interested in combining systems when creating strategies. To make it more tangible, I will create a simplified example that we will use as we move forward in development. I encourage readers to actively comment and contribute to the whole process.

Before we do anything, as with any project, I want to have an idea of the scope so I don’t get entangled in an infinite loop of development where I am never satisfied and continuously trying to replace indicator after indicator for a marginal increase in CAGR on the backtest. For this example, I want to use 4 indicators (two with a momentum flavour and two with a mean-reversion flavour), a volatility filter and an environmental filter. The strategy will also have an adaptive layer to it.

Now that the foundational idea has been laid out, let's examine the mechanics at a high level; keep in mind that this is only one way you could go about it. Basically, I will evaluate each strategy individually, looking at performance under different volatility environments and also with the global environment filter. Based on historical performance and the current environment, exposure will be determined.
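To make those mechanics concrete, here is a toy sketch in Python (the series itself will use R) of one way to turn per-signal performance into exposure. The signals, the 60-bar window, and the weighting rule are all placeholders, not the actual strategy:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
idx = pd.date_range("2015-01-01", periods=250, freq="B")
returns = pd.Series(rng.normal(0.0005, 0.01, len(idx)), index=idx)

# Four toy signals in {-1, 0, +1}; real ones would be the two momentum
# and two mean-reversion indicators described above.
signals = pd.DataFrame(
    rng.choice([-1, 0, 1], size=(len(idx), 4)),
    index=idx, columns=["mom1", "mom2", "mr1", "mr2"],
)

# Each signal's hypothetical P&L if traded alone (signal lagged one bar).
pnl = signals.shift(1).multiply(returns, axis=0)

# Weight each signal by its trailing 60-bar performance, floored at zero
# so persistently losing signals drop out of the ensemble.
weights = pnl.rolling(60).mean().clip(lower=0)
weights = weights.div(weights.sum(axis=1), axis=0).fillna(0)

# Net exposure is the weighted vote of the individual signals.
exposure = (signals * weights).sum(axis=1)
```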

This series will include R code for the backtests (I am still debating using quantstrat/blotter), and the simple example will be available for download. My goal is to give readers a way to replicate the results and extend the framework significantly. More to follow!


One Size Does Not Fit All

April 24, 2011

In our eternal search for the trading Holy Grail, it is often tempting to try to find the "ultimate" signal (indicator) and apply it to as many instruments as we can. This single-solution approach for the most part fails miserably; think of a carpenter with only a hammer in his (or her; QF is an advocate for equal opportunity) toolbox. While making signals adaptive is definitely an improvement, I think we sometimes miss the point.

Instead of harassing and optimizing a signal ad absurdum to improve the backtest, one would be better served by looking at the big picture. One signal in itself only contains so much information. While there are a lot of good indicators available that perform very well by themselves (the blogosphere is a really rich ecosystem in that regard), their power is only magnified when combined with other signals containing different information. The secret is in the signal aggregation; in other words, in how we form and use an ensemble of signals isolating different pieces of information to build a profitable strategy (note the use of the word strategy as opposed to system; careful wordsmithing aside, the difference is paramount). This is a topic I have been looking at closely recently, and I think the blogosphere is a perfect tribune to share some of my findings. As a starter, here are some of the points I will touch on in the upcoming series of posts.
1. What are the basic intuitions behind ensembles, and why can they help in building trading strategies?

2. How do we isolate and quantify specific pieces of information, and then observe their effect on the instruments we trade?

3. How do we evaluate the current pertinence of the signals?

4. Finally, how do we aggregate all the useful information and build a strategy from the ground up?
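As a minimal illustration of what signal aggregation means here, consider a simple majority vote over heterogeneous signals; the signal names and votes below are invented for the example, and a real ensemble would use something richer than equal-weight voting:

```python
import pandas as pd

# Three hypothetical signals on the same instrument, each in {-1, 0, +1}.
votes = pd.DataFrame({
    "trend":    [ 1, 1, -1,  1, 0],
    "meanrev":  [-1, 1, -1,  0, 0],
    "breakout": [ 1, 1,  0, -1, 0],
})

# Aggregate by simple majority: trade only when the net vote is decisive,
# stay flat when the signals disagree.
net = votes.sum(axis=1)
position = net.apply(lambda v: 1 if v >= 2 else (-1 if v <= -2 else 0))
```

The point is that no single column above is a strategy; the position series built from their agreement is.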

The mechanics are going to be explained using a simplified example for readers to follow along, but the intuition will be the same as the one behind the first QF strategy to be tracked in real time on the blog. I still don't have a fancy name for it, but it'll get one for its official launch.


Update, Milestone, and Unfinished Business

April 5, 2011

First of all, let me apologize for being off the grid for so long and not providing you with any of my geek prose recently. My final semester of university classes comes to an end today, and I will resume regular posting after finals. Rest assured that my absence from the blogosphere has not caused quant power atrophy; it was merely a by-product of how busy I was with school and interviewing. Thank you all for sticking with me during this dry spell.

The blog recently passed 50,000 views. While not an incredibly big number compared to the blogosphere's behemoths (see blogroll), I personally am really happy. I never really thought this blog would take on these proportions, and it keeps surprising me by bringing opportunities I never thought possible; for that I must thank you.

Unfinished Business
I know some of you were eagerly waiting for the TAA system post I kept saying would be coming soon; I am sorry to tell you that it will not. My services were hired by a private firm and the intellectual property developed is protected by a non-disclosure agreement. I might however discuss the intuition at a high level if the interest is still present.

Finally some of you might have noticed the new “QF Strategies” tab up top. For now it shines by its emptiness but it will not for much longer. I will be tracking strategies there soon, bear with me.