There are a number of strategies for making money in the financial markets which are extraordinarily sensitive to the time it takes to deliver an order to an exchange based on incoming market data. Some of these are in the pure arbitrage space (such as an equity which trades in euros on Euronext and in pounds on the LSE), but the vast majority are more complex (a trivial example: Cisco is part of the NASDAQ index; when the NASDAQ moves up a point, index funds will need to buy Cisco to keep up with the index, so buy Cisco the moment the index moves), to the point that you can't possibly describe them in this type of format. (Disclaimer: I've never worked in an ultra-low-latency trading environment; this is entirely based on conversations I've had with those who do.)
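To make the pure-arbitrage case concrete, here is a minimal sketch of the cross-listing example from the text: the same equity quoted in euros on Euronext and in pounds on the LSE. All of the prices, the FX rate, and the cost threshold below are made-up illustrative values, not real market data.

```python
# Illustrative sketch of cross-listing arbitrage: the same equity quoted in
# EUR on Euronext and GBP on the LSE. Prices, the FX rate, and the cost
# threshold are all made-up values for illustration only.

def cross_listing_signal(eur_price, gbp_price, gbp_per_eur, cost_bps=5.0):
    """Return 'BUY_EURONEXT', 'BUY_LSE', or None.

    Converts the Euronext quote into GBP terms and acts on the gap only
    when it exceeds round-trip costs (expressed in basis points).
    """
    eur_in_gbp = eur_price * gbp_per_eur      # Euronext quote in GBP terms
    gap_bps = (gbp_price - eur_in_gbp) / eur_in_gbp * 10_000
    if gap_bps > cost_bps:
        return "BUY_EURONEXT"                 # cheap in EUR, rich in GBP
    if gap_bps < -cost_bps:
        return "BUY_LSE"                      # cheap in GBP, rich in EUR
    return None                               # gap too small to cover costs

# Made-up quotes: EUR 10.00 on Euronext, GBP 8.60 on the LSE, 0.85 GBP/EUR.
print(cross_listing_signal(10.00, 8.60, 0.85))  # gap is roughly +118 bps
```

The whole game described below is that everyone running essentially this check sees the same gap at the same time, so only the fastest order capture it.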
These guys specialize in minimizing the time it takes to get their orders into the exchanges: if they don't get there before every other firm trading essentially the same strategy, they don't make money (and could in fact lose money). For that reason, they'll spend whatever it takes to make sure they can get orders in faster than their competitors. Specific things they're known to do include:
- Locating their computers in the same facility as the exchange (<sarcasm>because the speed of light between midtown and Secaucus makes a big difference</sarcasm>).
- Buying their networking hardware based on microseconds of packet-delivery latency.
- Buying MOM (message-oriented middleware) from ultra-low-latency specialists who compete at the microsecond level (like Solace Systems, who I've already profiled, and 29West, who I haven't)
- Almost completely ignoring InfiniBand, for no earthly reason that Nolan or I can figure out
- Sometimes using Java, even though it runs counter to every point above
The Utility Computing Play
Ultimately, this looks like a market you actually can hit with a utility/cloud computing play, because if you look at what they're doing, every single tool in their arsenal is aimed at one key metric: how fast can you deliver a new piece of market data from its original source to the code that must process it? All the tricks and techniques above are designed to optimize that single metric.
That seems to me like a metric a utility computing provider could easily optimize for, and in ways that individual firms couldn't, given their resources. For example, most people in this space aren't using InfiniBand, even though it's the clear latency winner for the types of distribution they're doing. Why not? Probably because they don't have the experience in coding against it, deploying it, and maintaining it. But why should they care? They want the data in user space as fast as possible; whether it arrives over IB or Ethernet doesn't matter. The key metric for optimization is user-space data delivery.
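The user-space delivery metric is easy to state in code: stamp each piece of market data at its source, and measure the delay until user-space code first touches it. The tick format and handler below are hypothetical, and the "feed" is simulated in-process; a real feed would carry an exchange or capture timestamp on each message.

```python
# Minimal sketch of the one metric that matters here: how long it takes a
# piece of market data to get from its source into the user-space code that
# handles it. The tick format and handler are hypothetical, and the feed is
# simulated in-process rather than arriving off a network.
import time

def measure_delivery_latency(ticks, handler):
    """For each (source_ts_ns, payload) tick, record the delay between the
    source timestamp and the moment user-space code touches the payload."""
    latencies_ns = []
    for source_ts_ns, payload in ticks:
        now_ns = time.perf_counter_ns()       # first touch in user space
        latencies_ns.append(now_ns - source_ts_ns)
        handler(payload)
    return latencies_ns

# Simulated feed: stamp each tick at the "source", then deliver it.
feed = [(time.perf_counter_ns(), {"sym": "CSCO", "px": 20.0 + i * 0.01})
        for i in range(1000)]
lats = measure_delivery_latency(feed, lambda tick: None)
print(f"median delivery latency: {sorted(lats)[len(lats) // 2]} ns")
```

Nothing in this measurement cares whether the bytes crossed IB or Ethernet, which is exactly the point: the provider can pick whatever transport wins, as long as the number above goes down.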
I think that rather than 10 banks and 30 hedge funds all playing the same game here, you're going to see 3-4 utility computing plays making an offer roughly along these lines:
- We're going to sell you computing power in some form (bear in mind that these guys usually want lots and lots of CPU time to run analytic models)
- We're going to have an SLA guaranteeing how fast we deliver new data into the user space of your code
- We're going to have an SLA guaranteeing how fast we can deliver data between your various VM slices (and you pay more for lower-latency delivery)
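A latency SLA of the kind proposed above would presumably be stated as a percentile bound and verified against measured samples. The sketch below shows what that check could look like; the 50 µs bound and the sample distribution are made-up numbers, not anything a real provider offers.

```python
# Sketch of verifying a hypothetical latency SLA: the provider guarantees,
# say, a 99th-percentile delivery latency, and the customer checks measured
# samples against it. The 50 µs bound and the sample distribution below are
# made-up numbers for illustration.
import random

def percentile(samples_us, p):
    """Nearest-rank percentile of a list of latency samples (microseconds)."""
    ranked = sorted(samples_us)
    idx = max(0, int(round(p / 100 * len(ranked))) - 1)
    return ranked[idx]

def sla_met(samples_us, p99_bound_us):
    """True if the 99th-percentile latency is within the SLA bound."""
    return percentile(samples_us, 99) <= p99_bound_us

random.seed(42)
# Hypothetical: most deliveries take 10-30 us, with an occasional slow tail.
samples = [random.uniform(10, 30) for _ in range(990)] + \
          [random.uniform(60, 100) for _ in range(10)]
print(f"p99 = {percentile(samples, 99):.1f} us, "
      f"SLA(50 us) met: {sla_met(samples, 50)}")
```

Tiering the price by the bound (tighter p99, higher fee) is what "you pay more for lower-latency delivery" would look like in a contract.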
Not everyone will make the jump, of course. There will be a period where some firms believe, correctly or not, that they can do better themselves. But that will largely be because they've built up lines of business that won't survive the transition, and they want to defend them as long as they can. Eventually even they will give up.
First of all, the hosting providers will be able to achieve a level of scale that no single financial services player could. That will let them pull R&D in house rather than relying on startups to push the boundaries of the technology, which means they'll even be able to afford custom silicon (perhaps working through design agencies, but clearly proprietary hardware configurations).
Second, all trading strategies which rely on acting on market data faster than competitors are going to disappear into Zero Arbitrage Assumption land: no firm will ever be able to outgun the hosting providers, and any hosting provider that allowed a single market participant (as opposed to another hosting provider) to gain a latency advantage would go out of business quite shortly thereafter.
Third, more players will actually get into the algo trading space. Right now there's a high fixed cost to building out an infrastructure capable of supporting low-latency automatic trading systems; once you've built out your data centres, switches, messaging brokers, and all that, though, that same fixed cost makes adding additional apps easy. Reduce that up-front cost to zero, and more people will start dabbling in low-latency algo trading. Hopefully the exchanges can handle it, because it's going to increase the number of trades dramatically, and it's going to introduce some people with some really stupid systems pretty quickly.
Fourth, Silicon Valley/startup activity in the space will change. The traditional path for a startup looking at the low-latency space, particularly in hardware, is to produce some prototypes, sell them one-by-one to each of the low-latency customers in financial services, exhaust the market, and sell out to Cisco or Juniper or somebody who will (hopefully) fold the underlying technology into their existing product range. This gets disrupted when utility vendors become the only remaining clients; I'm not sure exactly what that does to the startup prospects, except that it will definitely limit the number of potential reference customers a startup can acquire, and each one will be a much harder, much slower sell. (Note that this is not unique to the low-latency space; in general, the utility computing transition is shrinking the number of potential customers for most infrastructure hardware, and that's changing the market accordingly. I single this out because specifically targeting the financial services industry's latency-sensitive applications has been a well-known path for Silicon Valley startups, and that path is going to change.)
Fifth, more things currently viewed as in-house operations in low-latency processing are going to be cloudsourced. Once your computational nodes are outsourced and your market data delivery is outsourced, why stop there? Why not outsource your live analytics? Or your historical time-series storage and lookups? Or your mark-to-market golden copies for end-of-day P&L/risk? More and more of this is going to be cloudsourced, and very little of it is going to be provided directly by the utility companies: it'll be provided by companies that can span all the utility computing providers.
I think vendors will very soon resolve the regulatory issues and start launching specialist cloud computing solutions targeting the low-latency automatic trading community. This will result in:
- Hosting providers clearly winning the latency war.
- The elimination of most profitable low-latency trading strategies.
- Growth in algo trading as a financial services space.
- Silicon Valley viewing the market differently.
- More market data services being cloudsourced.