Algorithm design and correctness

Giving software you wrote access to your or your firm’s cash account is a scary thing. Making a mistake when manually executing a trade is bad enough when it happens (you can take my word for it if you haven’t had the pleasure yet), but when unintended transactions are made by a piece of software in a tight loop, it has the potential to be an extinction event. In other words, there is no faster way to self-immolate than to have incorrect software interacting with markets. Correctness is obviously a critical part of the design of an algorithm. How does one go about ensuring¹ it then?

Abstractly, a trading algorithm is simply a set of specific actions to be taken depending on the state of things. That state can be internal to the algorithm itself (such as open positions, working orders, etc.) or associated with the external world (current/historical market state, cash balance, risk parameters, etc.). It is therefore natural for me to think of trading algorithms as finite state machines (FSMs). That presents a couple of immediate advantages. FSMs are a very well-studied abstraction in the computer science world, so there is no need to reinvent the wheel: best practices are well established. And since they are so widely used, it is just about certain that you will find FSM implementations for your language of choice. For instance, a search for “finite state machine” on the Python Package Index returns over 100 different frameworks. I am confident the results would be similar for just about any modern language. That being said, let’s back up before we dive deeper into an application of the pattern.

Per Wikipedia, an FSM is an abstract machine that can be in exactly one of a finite number of states at any given time. The FSM can change from one state to another in response to some external inputs; the change from one state to another is called a transition. An FSM is defined by a list of its states, its initial state, and the conditions for each transition. We will use the Turtle trading System 2 rules as an example. The rules are summarized below:

Entry: Go long (short) when the price crosses above (below) the high (low) of the preceding 55 days.
Exit: Cover long (short) when the price crosses below (above) the low (high) of the preceding 20 days.

As we learned previously, to fully define an FSM we need a list of its states, its initial state, and the conditions for each transition. We then have to convert these trading rules into a proper FSM definition.

From the rules above I would define the following trading states: TradingState\in\left\{Init,Long,Short,StopFromLong,StopFromShort\right\}. In addition to the trading state, the following state variables are relevant to the strategy and would be tracked as well: IndicatorStatus\in\left\{Initializing,Ready\right\} and finally the channel levels \left(Low55,Low20,High55,High20\right)\in\mathbb{R}^{4}. That is to say, the state space of our algorithm is defined by the triplet \left\{TradingState,IndicatorStatus,\mathbb{R}^{4}\right\}.

If you can forgive my abuse of notation, the following outlines the state transitions and their respective conditions (SFL and SFS are shorthand for StopFromLong and StopFromShort):

\begin{cases}
Init\rightarrow Init & IndicatorStatus=Initializing \\
Init\rightarrow Long & IndicatorStatus=Ready\land Prc_{t}>High55_{t} \\
Init\rightarrow Short & IndicatorStatus=Ready\land Prc_{t}<Low55_{t} \\
Init\rightarrow SFL & \emptyset \\
Init\rightarrow SFS & \emptyset \\
\\
Long\rightarrow Init & \emptyset \\
Long\rightarrow Long & Prc_{t}\ge Low20_{t} \\
Long\rightarrow Short & Prc_{t}<Low55_{t} \\
Long\rightarrow SFL & Low55_{t}\le Prc_{t}<Low20_{t} \\
Long\rightarrow SFS & \emptyset \\
\\
Short\rightarrow Init & \emptyset \\
Short\rightarrow Long & Prc_{t}>High55_{t} \\
Short\rightarrow Short & Prc_{t}\le High20_{t} \\
Short\rightarrow SFL & \emptyset \\
Short\rightarrow SFS & High20_{t}<Prc_{t}\le High55_{t} \\
\\
SFL\rightarrow Init & \emptyset \\
SFL\rightarrow Long & Prc_{t}>High55_{t} \\
SFL\rightarrow Short & Prc_{t}<Low55_{t} \\
SFL\rightarrow SFL & Low55_{t}\le Prc_{t}\le High55_{t} \\
SFL\rightarrow SFS & \emptyset \\
\\
SFS\rightarrow Init & \emptyset \\
SFS\rightarrow Long & Prc_{t}>High55_{t} \\
SFS\rightarrow Short & Prc_{t}<Low55_{t} \\
SFS\rightarrow SFL & \emptyset \\
SFS\rightarrow SFS & Low55_{t}\le Prc_{t}\le High55_{t}
\end{cases}

Knowing the possible state transitions, we can determine what trading actions are needed in each instance:

\begin{cases}
Init\rightarrow Init & \emptyset \\
Init\rightarrow Long & \text{Send buy order for } N \text{ units} \\
Init\rightarrow Short & \text{Send sell order for } N \text{ units} \\
Init\rightarrow SFL & \emptyset \\
Init\rightarrow SFS & \emptyset \\
\\
Long\rightarrow Init & \emptyset \\
Long\rightarrow Long & \emptyset \\
Long\rightarrow Short & \text{Send sell order for } 2N \text{ units} \\
Long\rightarrow SFL & \text{Send sell order for } N \text{ units} \\
Long\rightarrow SFS & \emptyset \\
\\
Short\rightarrow Init & \emptyset \\
Short\rightarrow Long & \text{Send buy order for } 2N \text{ units} \\
Short\rightarrow Short & \emptyset \\
Short\rightarrow SFL & \emptyset \\
Short\rightarrow SFS & \text{Send buy order for } N \text{ units} \\
\\
SFL\rightarrow Init & \emptyset \\
SFL\rightarrow Long & \text{Send buy order for } N \text{ units} \\
SFL\rightarrow Short & \text{Send sell order for } N \text{ units} \\
SFL\rightarrow SFL & \emptyset \\
SFL\rightarrow SFS & \emptyset \\
\\
SFS\rightarrow Init & \emptyset \\
SFS\rightarrow Long & \text{Send buy order for } N \text{ units} \\
SFS\rightarrow Short & \text{Send sell order for } N \text{ units} \\
SFS\rightarrow SFL & \emptyset \\
SFS\rightarrow SFS & \emptyset
\end{cases}

This is obviously a simplistic example meant to illustrate my point, but let’s consider the design for a moment. First, the transition conditions out of each state are mutually exclusive, which is a prerequisite for a deterministic FSM. In plain terms that means that at any point in time I can evaluate the state and figure out clearly what the algorithm was trying to do, since there can only be one possible transition given the state at that specific point in time. That would not have been the case had I decided to define the following transitions:

\begin{cases}
\cdots \\
Long\rightarrow Short & Prc_{t}<Low55_{t} \\
Long\rightarrow SFL & Prc_{t}<Low20_{t} \\
\cdots
\end{cases}

In that case, whenever Prc_{t}<Low55_{t} both conditions evaluate to true (the 55-day low is at most the 20-day low) and there is more than one possible state transition. How would you know, looking at the logs, what the algorithm tried to do? In this case it is again obvious, but for more complex FSMs I always find it worth the time to carefully consider the entire state space and clearly define the algorithm’s behavior in a fashion similar to the above. This might seem very long winded, but it is something I do on paper religiously before I ever write a line of production code.
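To make this concrete, here is a minimal Python sketch of the transition table above. It assumes the channel levels and indicator status are computed elsewhere and passed in, and the send_order callback is a placeholder of my own rather than part of any particular framework; the one liberty taken is that the machine stays in Init when the indicators are ready but no breakout has occurred yet.

from enum import Enum, auto


class TradingState(Enum):
    INIT = auto()
    LONG = auto()
    SHORT = auto()
    STOP_FROM_LONG = auto()
    STOP_FROM_SHORT = auto()


def transition(state, ready, prc, low55, low20, high55, high20, send_order, n=1):
    """Return the next TradingState, emitting orders through send_order(side, qty)."""
    S = TradingState
    if state is S.INIT:
        if not ready:
            return S.INIT
        if prc > high55:
            send_order("buy", n)
            return S.LONG
        if prc < low55:
            send_order("sell", n)
            return S.SHORT
        return S.INIT  # indicators ready but no breakout yet

    if state is S.LONG:
        if prc < low55:              # opposite breakout: reverse the position
            send_order("sell", 2 * n)
            return S.SHORT
        if prc < low20:              # stopped out of the long
            send_order("sell", n)
            return S.STOP_FROM_LONG
        return S.LONG

    if state is S.SHORT:
        if prc > high55:             # opposite breakout: reverse the position
            send_order("buy", 2 * n)
            return S.LONG
        if prc > high20:             # stopped out of the short
            send_order("buy", n)
            return S.STOP_FROM_SHORT
        return S.SHORT

    # Flat after a stop: re-enter only on a fresh 55-day breakout.
    if prc > high55:
        send_order("buy", n)
        return S.LONG
    if prc < low55:
        send_order("sell", n)
        return S.SHORT
    return state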

The same pattern can also be used in other parts of your trading stack. For instance, your order management system could define orders as an FSM with the following states: \left\{Sent, Working, Rejected, CxlRequested, Cancelled, CxlRejected, PartiallyFilled, FullyFilled\right\}. The transitions in this case have to do with the reception of exchange order events such as acknowledgements, reject messages, etc. If you stop to think about it, you could design a whole trading stack with almost nothing but FSMs. By using them you can design for correctness and eliminate a whole class of potential flaws that might otherwise sneak in.
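As a sketch of what that could look like in code (the state names mirror the list above; the event handling is deliberately simplified and hypothetical), the whole point is that an illegal transition blows up loudly instead of quietly corrupting the book:

from enum import Enum, auto


class OrderState(Enum):
    SENT = auto()
    WORKING = auto()
    REJECTED = auto()
    CXL_REQUESTED = auto()
    CANCELLED = auto()
    CXL_REJECTED = auto()
    PARTIALLY_FILLED = auto()
    FULLY_FILLED = auto()


# Whitelist of legal transitions; REJECTED, CANCELLED and FULLY_FILLED are terminal.
ALLOWED = {
    OrderState.SENT: {OrderState.WORKING, OrderState.REJECTED},
    OrderState.WORKING: {OrderState.PARTIALLY_FILLED, OrderState.FULLY_FILLED,
                         OrderState.CXL_REQUESTED},
    OrderState.PARTIALLY_FILLED: {OrderState.PARTIALLY_FILLED, OrderState.FULLY_FILLED,
                                  OrderState.CXL_REQUESTED},
    OrderState.CXL_REQUESTED: {OrderState.CANCELLED, OrderState.CXL_REJECTED,
                               OrderState.PARTIALLY_FILLED, OrderState.FULLY_FILLED},
    OrderState.CXL_REJECTED: {OrderState.WORKING},
}


class Order:
    def __init__(self):
        self.state = OrderState.SENT

    def on_exchange_event(self, new_state: OrderState) -> None:
        """Apply an exchange event, refusing any transition not in the whitelist."""
        if new_state not in ALLOWED.get(self.state, set()):
            raise ValueError(f"illegal transition {self.state.name} -> {new_state.name}")
        self.state = new_state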

The pattern fits nicely with object-oriented design, where state and related behaviors can be grouped into nicely decoupled classes. That said, some functional languages have type systems that, used properly, give you additional guarantees and help you build some very powerful abstractions. We will examine an example in a following post. In the meantime, I would be very interested, as always, to hear which patterns you have found useful in your work.


1. [Or as close as one can get to certainty anyway!]


60% of the time, it works every time.

The blogosphere is a very interesting microcosm of the trading world. Many of my older readers will no doubt remember the glory days of “short-term mean-reversion”, by which I mean of course the multitude of posts (including several from yours truly) about RSI2, DV2 and the like. Around 2010 this type of strategy was quite successful, and many people put their own twist on it and posted their results.

Then, while this humble publication went into hibernation, the collective brain trust of the community turned to the relatively new volatility ETF space. It was glorious; backtests were run, strategies were tweaked, whole websites tracking the strategies popped up, and simulated equity curves went to the moon. Life was great. Then on Monday 2018-02-05 the music didn’t just slow down, it stopped. $XIV, the workhorse of many such strategies, and there is no nice way to say this, blew up. From what I can see, its demise was met with mixed emotions. Twitter traders with $0.00 AUM knew it all along and were obviously already short $XIV from the high, for size. People with subscription strategies either patted themselves on the back for side-stepping the reaper this time, or went AWOL to avoid having to take ownership of the losses incurred by their subscribers. My personal favorite: people selling strategies that usually held $XIV shares as their de facto short-volatility security declaring that its demise is a non-issue; $UVXY will do the trick just as well!

This demonstrates such a blatant lack of trading IQ that I struggle to put it into words. The idea that because the blowup was side-stepped this time, the next face-ripping event will be as well is simply preposterous. Selling volatility is something you do with other people’s money. It’s a great business: you pocket the recurring fees and performance incentives, and when the music stops and you lose your clients’ money, they take all the loss. As Ron Burgundy would put it, 60% of the time, it works every time. We would all be so lucky to find such asymmetric payoff propositions for ourselves: I share in the wins now, you get the blowout later, thanks for playing.

The vast majority of such systems I have encountered in the blogosphere were based on term-structure signals to determine whether long or short volatility exposure had a tailwind. In this particular instance, thankfully for some, the signal to get out of the short happened before the spike. Why should it do so next time, or the time after that?

I’d love to hear your thinking on the subject, esteemed reader. I know short volatility is a popular trade and has been for some time. Are you still going to do it? Are you worried about events such as the one from these past couple of days being an issue in the future? Do you want to pay me a monthly fee for putting you in a trade that has an expected value of 0?

I would be down with that if I could sleep at night knowing you take all the risk while I am the only one left with any profits to show when the chips land at the end. Unfortunately, I could not live with myself. For those interested, you can look on Collective2, but make sure you filter the strategies by performance excluding this week.

Machine learning is for closers

Put that machine learning tutorial down. Machine learning is for closers only.

As some of you who were around back in the early days of this blog may know, I always held high hopes for the application of machine learning (ml) to generate trading edges. Like many people first coming across machine learning, I found the promise of feeding raw data into some algorithm you don’t really understand and conjuring profitable trading ideas seemingly out of thin air very appealing. There is only one problem with that: it doesn’t work like that!

Much to my chagrin now, a lot (and I mean a lot) of what this blog is known for is exactly this type of silly application of ml. With this post, I hope to share some of the mistakes I made and lessons I learned while trying to make ml work for me that haven’t made it onto the blog due to my abysmal posting frequency. Here they are, in no particular order:

Understanding the algorithm you are using is important.

It is almost too easy to use ml these days. Multiple times I have read a paper forecasting the markets with some obscure algorithm and been able, through proper application of google-fu, to find open-sourced code that I could instantly try out on my data. This is both good and bad; on the one hand it is nice to be able to use the cutting edge of research easily, but on the other, should you really give something you don’t understand very well access to your trading account? Many of my dollars were mercilessly lit on fire walking that path. I don’t think you need to master an algorithm to apply it successfully, but understanding it well enough to explain why it might work on your specific problem is a good start.

Simple, not easy.

One of my worst flaws as a trader is that I am relentlessly attracted to complex ideas. I dream about building complex models able to solve the market puzzle and rake in billions of dollars à la RenTec. Meanwhile, back in the ruthless reality of the trading life, just about all the money I make trading comes from the thoughtful application of simple concepts I understand well to generate meaningful trading edges. That, however, does not mean it has to be easy. For instance, there are multiple reasons why a properly applied ml algorithm might outperform, say, ordinary least-squares regression in certain cases. The trick is to figure out whether the problem you are currently trying to solve is one of those. Related to the point above, understanding an ml technique gives you a better idea beforehand and saves you time.
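As a toy illustration of that last point, on entirely synthetic data with an interaction and a threshold effect (no trading claim whatsoever), a tree ensemble can pick up structure that a plain linear regression cannot; whether your actual problem has that kind of structure is the question worth asking before reaching for the fancier tool.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))
# Target with an interaction and a threshold effect, plus noise.
y = np.sin(X[:, 0]) * X[:, 1] + (X[:, 2] > 0.5) + rng.normal(scale=0.1, size=2000)

for model in (LinearRegression(), GradientBoostingRegressor(random_state=0)):
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(type(model).__name__, round(score, 3))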

Feature engineering is often more important than the algorithm you choose.

I cannot emphasize this point enough. A lot of the older posts on this blog are quite bad in that respect. Most of them use the spray-and-pray approach, that is to say: throw a bunch of technical indicators in as features, cry “.fit()!”, and let slip the dogs of war, as data-scientist Mark Antony would say. As you can imagine, it is quite difficult to actually make that work, and a lot of the nice equity curves generated by these signals don’t really hold up out-of-sample. Not a particularly efficient way to go about it. Generating good features is the trader’s opportunity to leverage their market knowledge and add value to the process. Think of it as getting the algorithm to augment the trader’s ability, not replace it altogether.
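To contrast with spray-and-pray, here is a hedged sketch of what deliberate feature engineering looks like to me: a handful of features, each encoding an explicit market hypothesis, built from an OHLC DataFrame like the one used later in this post. The column and feature names are illustrative only.

import pandas as pd


def make_features(df: pd.DataFrame) -> pd.DataFrame:
    """Each feature encodes a specific hypothesis rather than 'every indicator I could find'."""
    out = pd.DataFrame(index=df.index)
    ret = df["Close"].pct_change()
    # Hypothesis: short-term mean reversion -> recent drift scaled by its own volatility.
    out["zret_5"] = ret.rolling(5).mean() / ret.rolling(20).std()
    # Hypothesis: trend persistence -> where price sits inside its 55-day channel.
    chan_low = df["Low"].rolling(55).min()
    chan_high = df["High"].rolling(55).max()
    out["chan_pos_55"] = (df["Close"] - chan_low) / (chan_high - chan_low)
    # Hypothesis: volatility regime -> today's range relative to its 20-day average.
    bar_range = df["High"] - df["Low"]
    out["rel_range_20"] = bar_range / bar_range.rolling(20).mean()
    return out.dropna()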

Ensembles > single model.

Classical finance theory teaches us that diversification, i.e. combining multiple uncorrelated bets, is the only free lunch left out there. In my experience, combining models is far superior to trying to find the one model to rule them all.
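A minimal sketch of that idea (the models and data are arbitrary placeholders): instead of hunting for the single best estimator, hand several imperfectly correlated ones to an equal-weight combiner and evaluate the blend out-of-sample.

import numpy as np
from sklearn.ensemble import RandomForestRegressor, VotingRegressor
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
y = X[:, 0] * X[:, 1] + X[:, 2] + rng.normal(scale=0.5, size=1000)

ensemble = VotingRegressor([
    ("lin", Ridge()),
    ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
    ("knn", KNeighborsRegressor(n_neighbors=25)),
])

# The equal-weight blend is typically more stable out-of-sample than any single member.
print(cross_val_score(ensemble, X, y, cv=5, scoring="r2").mean())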

Model predictions can themselves be features.

Model stacking might seem counter-intuitive at first, but many Kaggle competition winners have built very good models on that concept. The idea is simple: use the predictions of ml models as features for a meta-model that generates the final output.
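A sketch of the stacking recipe with scikit-learn (everything here, models and data alike, is a placeholder): the base models’ out-of-fold predictions become the features of a meta-model, which keeps the meta-model from training on leaked in-sample fits. In a real application you would refit the base models on all the data and build the meta-features the same way for new observations.

import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 5))
y = X[:, 0] * X[:, 1] + X[:, 2] + rng.normal(scale=0.5, size=1000)

base_models = [Ridge(), RandomForestRegressor(n_estimators=200, random_state=0)]

# Level 0: out-of-fold predictions, one column per base model.
meta_features = np.column_stack(
    [cross_val_predict(m, X, y, cv=5) for m in base_models]
)

# Level 1: the meta-model learns how to weight the base models' forecasts.
meta_model = LinearRegression().fit(meta_features, y)
final_prediction = meta_model.predict(meta_features)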

I’ll conclude this non-exhaustive list by saying that the best results I have had come from using genetic programming to find a lot of simple edges that by themselves wouldn’t make great systems, but when thoughtfully combined create a profitable platform. I will discuss the approach in forthcoming posts.

Give me good data, or give me death

A good discussion not too long ago led me to start a revolution against some data management aspects of my technology stack. It is one of the areas where the decisions made will impact every project undertaken down the road. Time is one of our most valuable resources, and we need to minimize the amount of it we spend dealing with data issues. Messy and/or hard-to-use data is the greatest drag I have encountered when trying to produce research.

I had to keep a couple of things in mind when deciding on a solution. First, I knew I did not want to depend on any database software. I also knew that I would not be the only one using the data and that, although I use Python, other potential users still don’t know better and use R. The ideal solution would be as close to language agnostic as possible. Furthermore, I wanted a solution stable enough that I would not have to worry too much about backward compatibility in case of future upgrades.

With those guidelines in mind, I could start to outline what the process would look like:

  1. Fetch data from vendor (csv form)
  2. Clean the data
  3. Write the data on disk

The biggest decision I had to make at this stage was the format used to store the data. Based on the requirements listed above, I shortlisted a few formats that I thought would fit my purpose: csv, json, hdf5, and msgpack.

At this stage I wanted to get a feel for the performance of each of the options. In order to do that I created a simple dataset of 1M millisecond bars with four fields each, so 4M data points in total.

In [1]:
import pandas as pd
import numpy as np

#create sizable dataset
n_obs = 1000000
idx = pd.date_range('2015-01-01', periods=n_obs, freq='L')
df = pd.DataFrame(np.random.randn(n_obs,4), index=idx, 
                  columns=["Open", "High", "Low", "Close"])
df.head()
Out[1]:
Open High Low Close
2015-01-01 00:00:00.000 -0.317677 -0.535562 -0.506776 1.545908
2015-01-01 00:00:00.001 1.370362 1.549984 -0.720097 -0.653726
2015-01-01 00:00:00.002 0.109728 0.242318 1.375126 -0.509934
2015-01-01 00:00:00.003 0.661626 0.861293 -0.322655 -0.207168
2015-01-01 00:00:00.004 -0.587584 -0.980942 0.132920 0.963745
Let’s now see how they perform for writing.
In [2]:
%timeit df.to_csv("csv_format")
1 loops, best of 3: 8.34 s per loop
In [3]:
%timeit df.to_json("json_format")
1 loops, best of 3: 606 ms per loop
In [4]:
%timeit df.to_hdf("hdf_format", "df", mode="w")
1 loops, best of 3: 102 ms per loop
In [5]:
%timeit df.to_msgpack("msgpack_format")
10 loops, best of 3: 143 ms per loop
And finally let’s have a look at their read performance.
In [11]:
%timeit pd.read_csv("csv_format")
1 loops, best of 3: 971 ms per loop
In [10]:
%timeit pd.read_json("json_format")
1 loops, best of 3: 6.05 s per loop
In [8]:
%timeit pd.read_hdf("hdf_format", "df")
100 loops, best of 3: 11.3 ms per loop
In [9]:
%timeit pd.read_msgpack("msgpack_format")
10 loops, best of 3: 33.1 ms per loop
Based on that quick-and-dirty analysis, HDF seems to do better. Read performance matters much more to me, as the data should only be written once but will definitely be read more than that. Please note that I do not intend to portray this test as end-all proof, but simply as a look at what the options are and their relative performance. Based on my preliminary results, including but not limited to this analysis, I elected to store the data in the HDF format, as it meets all my requirements and looks to be fairly fast, at least for medium-sized data. It should also enable the R homies to use it through the excellent rhdf5 library.

So at this point I have decided on a format. The question that remains to be answered is how to organize it. I was thinking of something like this:

/data
|-- Equities
    |-- Stock (e.g. SPY, AAPL etc.)
        |-- Metadata
|-- Forex
    |-- Cross (e.g. USDCAD, USDJPY etc.)
        |-- Metadata
        |-- Aggregated data
        |-- Exchanges (e.g. IdealPRO etc.)
|-- Futures
    |-- Exchange (e.g. CME, ICE, etc.)
        |-- Contract (e.g. ES, CL etc.)
            |-- Metadata
            |-- Continuously rolled contract
            |-- Expiry (e.g. F6, G6, H6 etc.)

Personally, I am not too sure how best to do this. It seems to me that it would be rather difficult to design a clean polymorphic API to access the data with such a structure, but I can’t seem to find a better way.
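For what it is worth, pandas’ HDFStore lets a hierarchy like the one above map directly onto store keys, which is roughly how I was picturing it. The symbols, paths and metadata below are placeholders, and df is the benchmark frame from earlier in this post.

import pandas as pd

with pd.HDFStore("data.h5", mode="a", complib="blosc", complevel=9) as store:
    # Keys mirror the directory layout sketched above.
    store.put("/Equities/SPY/bars", df, format="table")
    store.put("/Futures/CME/ES/continuous", df, format="table")
    # Arbitrary metadata can ride along on the node's attributes.
    store.get_storer("/Futures/CME/ES/continuous").attrs.metadata = {
        "roll_rule": "volume",
        "multiplier": 50,
    }

# format="table" allows row-level querying when reading back.
subset = pd.read_hdf(
    "data.h5", "/Futures/CME/ES/continuous",
    where="index >= pd.Timestamp('2015-01-01 00:10:00')",
)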

I would like to hear what readers have come up with to address those problems. In addition to how you store and organize your data, I am very keen to hear how you would handle automating the creation of perpetual contracts without having to manually write a roll rule for each product. This has proven to be a tricky task for me, and since I use those contracts a lot in my analysis I am looking for a good solution.

Hopefully this discussion will be of interest to readers that manage their own data in-house.

99 Problems But A Backtest Ain’t One

Backtesting is a very important step in strategy development. But if you have ever gone through the full strategy development cycle, you may have realized how difficult it is to backtest a strategy properly.

People use different tools to implement a backtest depending on their expertise and goals. For those with a programming background, Quantstrat (R), Zipline and PyAlgoTrade (Python), or TradingLogic (Julia) are sure to be favorite options. For those preferring a retail product that involves less conventional programming, Tradestation or TradingBlox are common options.

One of the problems with using a third-party solution is often the lack of flexibility. This doesn’t become apparent until one tries to backtest a strategy that requires more esoteric details. Obviously this will not be an issue when backtesting the classics like moving averages or Donchian channel type strategies, but I am sure some of you have hit your head on the backtest complexity ceiling more than once. There is also the issue of fill assumptions. Most backtests I see posted on the blogosphere (including the ones present on this humble website) assume trading at the close price as a simplifying assumption. While this works well for the purpose of entertaining a conversation on the internet, it is not robust enough to be used as the basis for decisions to deploy significant capital.

The first time one can actually see how good (or bad) the chosen backtesting solution is, is when the strategy is traded live. However, I am always amazed at how little attention some traders pay to how closely their backtests match their live results. To some, it is as if going live is simply the step that follows the backtest. I think this misses a crucially important part of the trading process, namely the feedback loop. There is a lot to be learned in figuring out where the differences between simulation and live implementation come from. In addition to the obvious bugs that may have slipped through testing, it will quickly become apparent whether your backtest assumptions are any good and whether or not they must be revisited. Ideally, backtested and live results for the period in which they overlap should be closely similar. If they are not, one should be asking serious questions and trying to figure out where the discrepancies come from. In a properly designed simulation on slow-frequency data (think daily or longer) you should be able to reconcile both to the penny. If the backtester is well designed, the difference will probably center on the fill price at the closing auction differing from the last traded price, which is typically what gets reported as the close. I always pay particular attention to the data I use to generate live signals and compare it to the data fed to the simulation engine, as I often find that the live implementation trades off data that doesn’t quite match the simulation dataset. Obviously, as the time frame shrinks the problems are magnified. Backtesting intraday trading strategies is notoriously difficult and beyond the scope of this blog. Let’s just say that a good intraday backtester is a great competitive advantage for traders and firms willing to put in the development time and money.
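As an example of the kind of reconciliation I have in mind, assuming you keep both simulated and live fills keyed by a common order identifier (the file names and columns below are hypothetical):

import pandas as pd

sim = pd.read_csv("sim_fills.csv")    # columns: order_id, symbol, qty, fill_price
live = pd.read_csv("live_fills.csv")  # same schema, produced by the live engine

recon = sim.merge(live, on="order_id", how="outer",
                  suffixes=("_sim", "_live"), indicator=True)

# Orders present on only one side are the first thing to chase down.
unmatched = recon[recon["_merge"] != "both"]

# For matched orders, measure slippage between simulated and actual fills.
matched = recon[recon["_merge"] == "both"].copy()
matched["px_diff"] = matched["fill_price_live"] - matched["fill_price_sim"]
matched["dollar_diff"] = matched["px_diff"] * matched["qty_live"]

print(unmatched[["order_id", "_merge"]])
print(matched["dollar_diff"].describe())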

It would be negligent of me to complain about backtesting problems without offering some of the processes I use to improve their quality and, ultimately, usability. First, I personally chose not to use a third-party backtesting solution. I use software that I wrote, not because it is better than other solutions out there, but because it allows me to fully customize all aspects of the simulation in a way that is intuitive to me. That way I can tune any backtest, as part of the feedback loop I was referring to earlier, to more accurately model live trading. Furthermore, as I refined the backtester over time, it slowly morphed into an execution engine that, with the proper adapters, can be used to trade live markets. Effectively I have a customized backtest for each strategy, but they all share a common core of code that forms the base of the live trading engine. I also spend quite some time looking at live fills versus simulated fills and trying to reconcile the two.

Please do not think that I am trying to tell you that the do-it-yourself solution is the best; I am simply saying that it is the one that fits me best. The point I am trying to make is that no matter what solution you decide to use, it is valuable to examine the differences between simulated and live results; who knows, perhaps it will make you appreciate the process even more.

I would be tremendously interested to hear what readers think on the subject; please share some insight in the comment section below so everybody can benefit.

QF

Hello Old Friend

Reports of my death have been greatly exaggerated ~Mark Twain

Wow, it has been a while. Roughly four years have gone by since my last post. It might seem like a long time to some, but coming out of college and hitting the ground running as a full-time trader made it feel like the blink of an eye to me. That being said, I have recently come to miss it and intend to start blogging again, albeit on an irregular schedule that will evolve based on my free time.

What to expect

Obviously, since I have been trading full time, my skill set has evolved, so I can only imagine that the new perspective I hope to bring to the analysis going forward will be more insightful.

You will notice a few changes, the biggest one being that I no longer use R as my main language. I have all but fully moved my research stack to Python, so you can expect to see a lot more of it going forward. As for the content, I think the focus will remain the same for the most part: algorithmic trading for the equities markets.

Special Thank You

Finally, I want to take the time to thank the readers who kept emailing and keeping in touch during my absence from the blogosphere. I can only hope that the number of people who somehow find value in these short articles will grow over time and that I will meet other interesting people. You are, after all, the reason I write these notes. So a big thank you to all of you.

QF