Wednesday, December 31, 2008
2009 Predictions
Just a quick set of predictions for 2009. I'll revisit nearer the end of the year to see how I've done.
Messaging Breaks Out
Right now asynchronous systems, particularly ones using message oriented middleware, are still pretty fringe. However, even though I disagree with the technology choice, technologies like XMPP are starting to bring asynchronous communications to the masses. I predict that the spread of XMPP, the forthcoming standardization of AMQP, new infrastructure under development (like RabbitMQ, OpenAMQ, and Qpid), and support in more programming languages and application execution environments will all drive more and more developers to finally stop polling for updates.
Cloud Will Become Less Buzzy
Right now Cloud/Utility computing is really too buzzy to make heads or tails of, and I think it's suffering from that: when you say "Cloud Computing", it can mean anything, and therefore nothing. I think 2009 will be the year that people start to solidify what works best in a utility computing space versus a locally hosted space, and when hosting providers get mature enough and the technology becomes mainstream enough that utility, rather than local hosting, becomes the default assumption.
Java Will Stagnate
I predict that Java 7, if it even hits by the end of the year, will be pretty anemic and, quite frankly, lame. There will be some interesting enhancements to the JVM, but that's pretty much all that's going to be noteworthy.
C# Will Over-Expand
The entire Microsoft ecosystem will expand and grow beyond the ability of typical Microsoft stack developers to keep up. We're already seeing it with WinForms vs. WPF, LINQ vs. ADO.NET Entity Framework, and whole new technologies like WCF. I predict that by the end of the year the Microsoft ecosystem will have become so bewildering that while leaders in the community push for more change, day-to-day developers will push for a pause just to absorb what's already happened.
Non-Traditional VM Languages Will Break Out
Right now most developers in enterprise systems are coding in the "core" languages (C#, VB.Net, Java) against their respective VMs. This reflects in no small part the maturity and power of the underlying runtimes (the CLR and JVM respectively), but the leading edge of these communities has already started to explore other languages that offer concrete advantages (F#, Scala, Groovy) on the same solid runtimes that are already available. This will be the year that those languages actually break out of the leading edge and into mainstream use.
No-One In Social Networking Makes Money
I still think, by the end of 2009, there won't be a player in the social networking space that is cashflow positive, and I don't think it'll be by choice. I think the technology is so disruptive that we still haven't seen the "right way" to monetize it yet.
Sun Radically Restructures
Sun can't keep burning through cash the way they have been, and they can't continue to have such a chaotic story. At some point in 2009, Sun will radically restructure itself in a bid for survival. Hopefully they'll have read my analysis (part 1, part 2). At the very least, they'll change their ticker away from JAVA.
I think you'll note that all my predictions are fuzzy. That's so that in December, 2009 when we revisit them, we can determine that I was partially right on nearly all of them.
Labels: AMQP, C#, Java, Messaging, RabbitMQ, Social Networks, Sun, Utility Computing
Tuesday, December 30, 2008
SNOS: Quit Angering Andrew Tanenbaum
Recently, someone forwarded me a blog post about a Social Network Operating System and what that might mean. I'll tell you what it means: nothing. Much like the Chewbacca Defense, it doesn't make sense.
An operating system is software designed to manage low-level hardware resources on behalf of software processes. Let's look at the examples from the evolutionary progression in the original blog post:
- Unix: An operating system. Huzzah! We've correctly identified one!
- J2EE: Not an operating system. Probably not even the right example for this entry, but nevertheless...
- "Something in the browser": Not an operating system. Lest we forget history, Ted Dziuba already told us what we need to know about this subject.
The irony is that what Chris is talking about early in the post is quite a useful thing to have, which is a set of underlying services that will allow people to develop new social applications easily that combine features and data from other social applications. But that's not an Operating System.
It's also not a Relational Database, a B-Tree, a Hash Table, a Compiler, or a Micro-Kernel. All of these things have pretty precise meanings that everybody understands, and that precision facilitates conversation. Let's keep the conversation on precise terminology. (Inventing your own terminology is fine as well if there's no term that precisely describes your use case; that's how we got Micro-Kernel, for example. Just don't reuse an existing one.)
Monday, December 22, 2008
Scoble Joins The Real-Time Web Conversation
Well, maybe "joins" isn't the proper word, but this showed up on my FriendFeed account today: RSS Shows Its Age. As I've posted about already, I think polling for updates is pretty lame, and asynchronous messaging is the right approach. But yet again, XMPP isn't it.
Enter the next contender: SUP. It took me a while to figure out what precisely it is, but it's essentially a JSON-encoded consolidated feed URL for a batch of URLs. So if you are a service that has a million individual RSS URLs, you provide one SUP URL, which allows clients to make a single request and see all URLs that have been updated "recently". The magic sauce (there's a rough client-side sketch after the list) is:
- Instead of individual RSS URLs, they get encoded into a nonce (an SUP-ID) to save on transfer costs;
- JSON rather than XML;
- Definition of "recently" is up to the publisher so that it can be statically served and cached and changed depending on publisher server load.
But it's still polling! It still ultimately doesn't scale. You can make polling suck less, but it will always suck.
Again, asynchronous message oriented middleware systems are the ultimate solution to this, and anything you do in the interim is only postponing the inevitable.
Tuesday, December 16, 2008
The Forthcoming Ultra-Low-Latency Financial Apocalypse
This is a follow-on from my previous post on utility computing in financial services, and came out of a discussion between Nolan and me when he was helping review an early draft of that post (yes, I actually do get the blog posts reviewed, unlike my email rants). He brought up a good point which I think is going to result in the ultimate apocalypse of much of the ultra-low-latency space in financial services infrastructure.
Ultra-Low-Latency Background
There are a number of strategies for making money in the financial markets which are extraordinarily sensitive to the time it takes to deliver an order to an exchange based on incoming market data. Some of these are in the pure arbitrage space (such as an equity which trades in euros on Euronext and in pounds on the LSE), but the vast majority of them are more complex (a trivial example: Cisco is part of the NASDAQ index; when the NASDAQ index moves up 1 point, index funds will need to buy Cisco to keep up with the index, so buy Cisco if the NASDAQ moves up a point), to the point that you can't possibly describe them in this type of format. (Disclaimer: I've never worked in an ultra-low-latency trading environment; this is entirely based on conversations I've had with people who do.)
These guys specialize in minimizing the time it takes for them to get their orders into the exchanges: if they don't get there before any other company trading using essentially the same strategy, they don't make money (and could in fact lose money). For that reason, they'll spend whatever it takes to make sure that they can get orders in faster than their competitors. Specific stuff they're known to do includes:
- Locating their computers in the same facility as the exchange (<sarcasm>because the speed of light in between midtown and Secaucus makes a big difference</sarcasm>).
- Buying their networking hardware based on the number of microseconds latency for packet delivery.
- Buying MOM from ultra-low-latency specialists who focus at the microsecond level (like Solace Systems, who I've already profiled, and 29West, who I haven't)
- Almost completely ignoring Infiniband for no earthly reason that Nolan or I can figure out
- Sometimes using Java, even though it contradicts every single above-mentioned point
The Utility Computing Play
Ultimately, this looks like a market that you actually can hit with a utility/cloud computing play, because if you look at what they're doing, every single tool in their arsenal amounts to one key metric: how fast you can deliver a new piece of market data from its original source to the code which must process it. All the tricks and techniques above are designed to optimize this one single metric.
That to me seems like a metric that a utility computing provider could easily optimize for, and do so in ways that individual firms won't, given the resources required. For example, most people in this space aren't using Infiniband, even though it's the clear latency winner for the types of distribution that they're doing. Why not? Probably because they don't have the experience in coding against it, deploying it, and maintaining it. But why should they care? They want the data into user space as fast as possible. Whether that's done over IB or Ethernet doesn't matter. The key metric for optimization is user-space data delivery.
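As a toy illustration of that metric (and nothing more; clock synchronization across machines, which is the genuinely hard bit, is hand-waved away here), measuring it comes down to stamping the data at the source and looking at the clock again the instant your user-space handler touches it:

```java
import java.nio.ByteBuffer;

public class TickLatency {
    // Publisher side: stamp the tick as close to the source as possible.
    static ByteBuffer encodeTick(long instrumentId, double price) {
        ByteBuffer buf = ByteBuffer.allocate(24);
        buf.putLong(System.nanoTime());  // send timestamp (same clock domain assumed)
        buf.putLong(instrumentId);
        buf.putDouble(price);
        buf.flip();
        return buf;
    }

    // Consumer side: the metric is "now minus send timestamp" at the first
    // point user-space strategy code actually touches the data.
    static void onTick(ByteBuffer buf) {
        long sentAt = buf.getLong();
        long instrumentId = buf.getLong();
        double price = buf.getDouble();
        long latencyNanos = System.nanoTime() - sentAt;
        System.out.printf("instrument %d @ %.2f, source-to-user-space: %d us%n",
                instrumentId, price, latencyNanos / 1000);
    }

    public static void main(String[] args) {
        onTick(encodeTick(42L, 101.25));  // in-process, so the number is tiny
    }
}
```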
I think rather than 10 banks and 30 hedge funds all playing the same game here, you're going to see 3-4 utility computing plays which are making an offer roughly along these lines:
- We're going to sell you computing power in some form (bear in mind that these guys usually want lots and lots of CPU time to run analytic models)
- We're going to have an SLA guaranteeing how fast we're going to deliver new data into user-space of your code
- We're going to have an SLA guaranteeing how fast we can deliver data between your various VM slices (and you pay more for lower-latency delivery)
Not all of them will take that offer at first, of course. There will be a period where there are firms which believe, correctly or not, that they can do better themselves. But this will largely be because they've built up lines of business that won't survive the transition and they want to defend them as long as they can. Eventually even they will give up.
The Aftermath
First of all, the hosting providers will be able to achieve a level of scale that no single financial services player could. This will allow them to pull R&D in house rather than relying on startups to push the boundaries of technology. It means they'll even be able to afford custom silicon (maybe working through design agencies, but clearly proprietary hardware configurations).
Second, all trading strategies which rely on acting on market data faster than competitors are going to disappear into Zero Arbitrage Assumption land: no firm will ever be able to outgun the hosting providers. Any hosting provider which ever allowed a single market participant (as opposed to another hosting provider) to get a latency advantage would go out of business quite shortly thereafter.
Third, more players will actually get into the algo trading space. Right now there's a high fixed cost in building out an infrastructure capable of supporting low-latency automatic trading systems. That fixed cost means you can add additional apps easily once you've built out your data centres and switches and messaging brokers and all that. Once you reduce that cost to 0, more people will start dabbling in low-latency algo trading. Hopefully the exchanges are able to handle it, because it's going to increase the number of trades dramatically and introduce some people with some really stupid systems pretty quickly.
Fourth, Silicon Valley/startup activity in the space will change. A traditional path for a startup looking at the low-latency space, particularly in hardware, is to produce some prototypes, sell them to each of the low-latency customers in financial services one by one, exhaust the market, and sell out to Cisco or Juniper or somebody and have them (hopefully) include the underlying technology in their existing product range. This gets disrupted by a move to utility vendors as the only remaining clients; I'm not sure quite what this does to startup prospects, except that it will definitely limit the number of potential reference customers that a startup can acquire. Each one will be a much harder sell, and require much more time. (Note that this is not unique to the low-latency space; in general, the utility computing transition is shrinking the number of potential customers for most infrastructure hardware, and that's changing the market accordingly. I single this out because specifically targeting the financial services industry's latency-sensitive applications has been a well-known path for Silicon Valley startups, and that path is going to change.)
Fifth, more things viewed as an in-house operation in low-latency processing are going to be cloudsourced. Once you have your computational nodes outsourced, and you have your market data delivery outsourced, why not keep going? Why not outsource your live analytics? Or your historical time series storage/lookups? Or your mark-to-market golden copies for end-of-day P&L/Risk? More and more of this is going to be cloudsourced, and very little of it is going to be provided directly by the utility companies: it'll be provided by companies that can span all the utility computing companies.
Conclusion
I think very soon vendors will resolve the regulatory issues and start launching specialist cloud computing offerings targeting the low-latency automatic trading community. This will result in:
- Hosting providers clearly winning the latency war.
- The elimination of most profitable low-latency trading strategies.
- Growth of algo trading as a financial services space.
- A change in how Silicon Valley views the market.
- More market data services being cloudsourced.
Monday, December 15, 2008
REST Requires Asynchronous Notification
I recently (well, recently compared to people like Steve Vinoski) got converted to the RESTful architecture movement, but one of the first things I recognized is that it suffers from the same generic limitation as every HTTP-based component architecture I've seen: there's no defined way to get updates on content.
In fact, as I got more immersed in the REST culture, I realized that a lot of the nuances you need to master in order to provide a great RESTful interface (caching, timeouts, advanced status codes) are there specifically to limit the number of repolls that hit the terminal service, by improving the ability of upstream caching proxies to handle requests.
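To give a minimal sketch of the kind of repoll-limiting machinery I mean (the URL and ETag value are made up), here's a conditional GET that lets an upstream cache or the origin answer 304 Not Modified instead of regenerating and reshipping the whole representation:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class ConditionalGet {
    public static void main(String[] args) throws Exception {
        URL resource = new URL("http://example.com/orders/1234");

        HttpURLConnection conn = (HttpURLConnection) resource.openConnection();
        // The ETag we remembered from the last 200 response.
        conn.setRequestProperty("If-None-Match", "\"v17\"");

        int status = conn.getResponseCode();
        if (status == HttpURLConnection.HTTP_NOT_MODIFIED) {
            // 304: nothing changed, keep using the cached representation.
            // Cheap for the origin, and an upstream proxy may have answered
            // for it -- but we still paid a round trip, and we still only
            // learn about changes when we think to ask.
            System.out.println("not modified");
        } else if (status == HttpURLConnection.HTTP_OK) {
            String etag = conn.getHeaderField("ETag");
            String cacheControl = conn.getHeaderField("Cache-Control");
            // e.g. Cache-Control: max-age=300 -- the publisher has just
            // dictated our worst-case staleness for the next five minutes.
            System.out.println("new version: ETag " + etag + ", " + cacheControl);
        }
    }
}
```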
But no matter what, there are clearly latency issues involved in any polling system, particularly one which is using fixed content caching timeouts to optimize performance (because this puts the latency entirely under the control of the publishing system which generated the result of the initial GET command). How do we minimize those?
Pull RSS Is No Substitute
RSS/Atom (note that I'm going to use RSS as a shorthand for the whole RSS/RDF/Atom spectrum of pull-based syndication lists) seems like a reasonable way to minimize the actual content that needs to be polled by batching updates: instead of having clients constantly polling every URL of interest, you batch them up into groups (perhaps one for each resource domain) and provide a single feed on each group. The RSS feed then indicates the state of all the resources in its domain that might have changed in the last period of interest. So now we've reduced hundreds of GETs down to one, but we're still polling.
My old colleague Julian Hyde posted some stuff recently which was precisely along the lines of what we're talking about here (post #1, post #2), in that he's building a live querying system against streams, and finding that the polling nature of RSS is particularly noxious for the type of system they're building. I'm surprised it took him so long. RSS is actually a pretty terrible solution to this problem: it combines polling with extra data (because an RSS feed is generic, it publishes more data than any particular consumer needs) in an inefficient format, and it still allows the RSS publisher to set a caching timeout. Wow. Talk about a latency effect!
It gets worse. Let's assume that you're trying to do correlations across feeds. Depending on the cache timeout settings of each individual feed, you might have a latency of 1sec on one feed and 15min on another feed. How in the world do you try to do anywhere near real-time correlations on those two feeds? Answer: you can't. At best you can say "hey, dear user, 15 minutes ago something interesting happened." 15 minutes is a pretty short cache timeout for an RSS feed to be honest, but an eternity to many applications.
The only real advantage that RSS/Atom provides is that because it's based on HTTP, and we have all these skills built up as a community to scale HTTP-based queries (temporary redirects, CDNs, caching reverse proxies), we can scale out millions of RSS queries pretty well. Problem? All those techniques increase latency.
Enter AMQP
Anyone with any experience with asynchronous message oriented middleware can see that fundamentally you have a one-to-many pub/sub problem here, and that smells like MOM to me. More specifically, you have a problem where ideally you want a single publisher to be able to target a massive number of subscribers, where each subscriber gets only the updates that they want without any polling. Wow. That looks like exactly what MOM was designed for: you have a broker which collects new messages, maintains subscription lists to indicate which consumers should get which messages, and then routes messages to their consumers.
It's so clear that some type of messaging interface is the solution here that people are starting to graft XMPP onto the problem, and using that to publish updates to HTTP-provided data. Which would be nice, except that XMPP is a terrible protocol for asynchronous, low-latency MOM infrastructure. (If nothing else, the XML side really will hurt you here: GZipping will shrink XML quite a bit, at a computational and latency cost; a simple "URL changed at timestamp X" is going to be smaller in binary form than GZipped XML and pack/unpack much faster).
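As a back-of-the-envelope illustration of that size claim (my own toy encoding, not any standard): a change notification really only needs a resource identifier and a timestamp, which fits in a couple of dozen bytes with nothing to build, gzip, ship, gunzip and parse:

```java
import java.nio.ByteBuffer;
import java.nio.charset.Charset;

public class ChangeNotification {
    // "Resource X changed at time T" as a tiny binary frame:
    // 8-byte timestamp + 2-byte length prefix + UTF-8 resource key.
    static byte[] encode(String resourceKey, long changedAtMillis) {
        byte[] key = resourceKey.getBytes(Charset.forName("UTF-8"));
        ByteBuffer buf = ByteBuffer.allocate(8 + 2 + key.length);
        buf.putLong(changedAtMillis);
        buf.putShort((short) key.length);
        buf.put(key);
        return buf.array();
    }

    public static void main(String[] args) {
        byte[] frame = encode("orders/1234", System.currentTimeMillis());
        // ~21 bytes here, versus a gzipped XML stanza many times that size.
        System.out.println(frame.length + " bytes");
    }
}
```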
But we've already got another solution brewing here: AMQP, which simultaneously provides:
- Binary protocol (for performance)
- Open protocol (for compatibility)
- Open Source implementations
- Built by MOM experts
Combining REST with AMQP
At my previous employer, I actually built this exact system. In short, resources also had domain-specific MOM topics to which the REST HTTP system would publish an asynchronous message whenever a resource was updated. Clients would hit the HTTP endpoint to initialize their state, and immediately set up a subscription to updates on the separate MOM system.
Without the MOM component, it's just pure REST. Once you add it, you eliminate resource and RSS polling entirely. The MOM brokers are responsible for making sure that clients get the updates they care about, and the only added latency on updates is whatever the broker introduces. It worked brilliantly: all the benefits of a RESTful architecture, with none of the update latency effects.
We had it easy though: this was all purely internal. We didn't have to operate at Internet Scale, and we didn't have to interoperate with anybody else (no AMQP for us, this was all SonicMQ as the MOM broker).
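For illustration, here's roughly what that pattern looks like sketched against the current RabbitMQ Java client instead; the exchange name, topic scheme, broker host, and resource URL are all invented for the example, and our real system spoke to SonicMQ over JMS rather than anything like this:

```java
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;

import java.io.InputStream;
import java.net.URL;

public class RestPlusAmqpClient {
    public static void main(String[] args) throws Exception {
        // 1. Plain REST: GET the resource once to initialize local state.
        InputStream initial = new URL("http://example.com/orders/1234").openStream();
        // ... parse the representation, build local state ...
        initial.close();

        // 2. Subscribe for subsequent changes instead of re-polling.
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("broker.example.com");
        Connection conn = factory.newConnection();
        Channel channel = conn.createChannel();

        // Topic exchange the REST service publishes update notifications to,
        // one routing key per resource (e.g. "orders.1234").
        channel.exchangeDeclare("resource.updates", "topic", true);
        String queue = channel.queueDeclare().getQueue(); // private, auto-delete queue
        channel.queueBind(queue, "resource.updates", "orders.1234");

        channel.basicConsume(queue, true, new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String consumerTag, Envelope envelope,
                                       AMQP.BasicProperties props, byte[] body) {
                // The broker pushes the notification to us; either the new
                // representation is in the body, or we GET the resource again.
                System.out.println("changed: " + envelope.getRoutingKey());
            }
        });
        // No polling loop anywhere: the broker does the routing.
    }
}
```

On the service side, the handler for PUT/POST/DELETE just publishes to the same exchange with the resource's routing key once the change has been committed.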
In order to get this to work as a general principle, we as a community need to do the following:
- Finish AMQP 1.0 (I've been told the specs are nearly ready, and then come the implementation and compatibility jams);
- Decide on a standard pattern for REST updates (topic naming, content encoding, AMQP broker identification in HTTP response headers; a hypothetical sketch of the header part follows this list);
- Develop easy to use publisher and subscriber libraries (so that services and client packages can easily include this functionality across all the major modern languages);
- Develop AMQP brokers and services that can operate at Internet Scale.
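To show what I mean by the broker-identification point, here's one hypothetical shape for it; none of these header names exist in any standard today, which is exactly the problem:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class UpdateDiscovery {
    public static void main(String[] args) throws Exception {
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://example.com/orders/1234").openConnection();
        conn.getResponseCode();

        // Hypothetical headers a RESTful service could return alongside the
        // representation, telling clients where to subscribe for updates:
        //   X-AMQP-Broker:      amqp://broker.example.com:5672/
        //   X-AMQP-Exchange:    resource.updates
        //   X-AMQP-Routing-Key: orders.1234
        String broker = conn.getHeaderField("X-AMQP-Broker");
        String exchange = conn.getHeaderField("X-AMQP-Exchange");
        String routingKey = conn.getHeaderField("X-AMQP-Routing-Key");

        // A generic client library could now open the subscription with no
        // out-of-band configuration at all.
        System.out.println(broker + " / " + exchange + " / " + routingKey);
    }
}
```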
Purity vs. Pragmatism
I was being interviewed for a new job last week (yes, I am actively on the market), and had a very interesting, frustrating run-in with one of the interviewers at a large bank (disclaimer: by mutual agreement, the role and I decided we weren't right for each other; you'll figure out why shortly).
We Frustrate Each Other
The frustrating part of the interview came when I realized that The Interviewer (we'll abbreviate him to TI, not to be confused with the hip-hop artist T.I.) and I were disagreeing on virtually every point of philosophical substance. But that was just a manifestation of a broader disagreement that I think went to the core of why the two of us must never work together: we fundamentally disagree on the core principle of software engineering.
Let me give a few examples of where we differed, to show what was going on (no, this is nowhere near an exhaustive list):
- He believed that checked exceptions in Java are always wrong; I believe that sometimes they're useful and sometimes they're not.
- He believed that you should only accept or return interfaces (and never a concrete class) from a module; I believe that you pick and choose between interfaces and POJOs depending on the context.
- He believed that setter-based Dependency Injection is always wrong and that only constructor-based DI should be used; I believe that you pick the right one for the context (a small sketch of both styles follows this list).
- He believed that you can only ever use DI with a DI container like Spring or PicoContainer; I believe that it's an architectural principle that can be applied with or without DI-specific tools.
- He believed that you cannot consider yourself a practitioner of agile methodology without rigidly adopting a Formal Agile Methodology (Scrum, XP, whatever); I believe that the whole point of agile methodologies is that you pick amongst the parts that help your team develop better software faster.
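Here's the small sketch I mean on the DI point, with a made-up Pricer class and both flavours side by side; whether a collaborator belongs in the constructor or behind a setter depends on whether the class can function without it:

```java
public class DiStyles {
    interface Quotes { double latest(String symbol); }
    interface AuditLog { void record(String message); }

    static class Pricer {
        private final Quotes quotes;   // mandatory collaborator
        private AuditLog audit;        // optional collaborator

        // Constructor injection: you simply cannot build a Pricer without quotes.
        Pricer(Quotes quotes) {
            this.quotes = quotes;
        }

        // Setter injection: wire in auditing only where somebody actually wants it.
        void setAuditLog(AuditLog audit) {
            this.audit = audit;
        }

        double price(String symbol) {
            double p = quotes.latest(symbol) * 1.001;
            if (audit != null) {
                audit.record(symbol + " priced at " + p);
            }
            return p;
        }
    }

    public static void main(String[] args) {
        Pricer pricer = new Pricer(new Quotes() {
            public double latest(String symbol) { return 100.0; }
        });
        pricer.setAuditLog(new AuditLog() {
            public void record(String message) { System.out.println(message); }
        });
        System.out.println(pricer.price("CSCO"));
    }
}
```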
Purity
TI's approach to every major difference between the two of us fell down on the side of rigid, unbending application of a principle in the interests of purity. My approach is far more fluid and contextual.
Purity in any endeavor (art, design, architecture, music, religion, software engineering) is attractive because it strips away all thought and all decisions, and in doing so, pushes a concept to its ultimate expression. I can understand the sentiment. When I moved into my flat, every single surface was either white (floors, walls, ceilings, some doors) or gray metal (stairs, shelves, other doors). It's a minimalist, pure aesthetic, and it removes all distraction and makes it very simple to make decisions: there's nothing subjective about additions.
Sometimes, purity is exactly what you want. It focuses you, and allows you to fully explore one concept to its extreme (how white can you get the walls and floors? can you get them the same white even though you have to use different paints for different surfaces? can you keep the floor white even though people walk on it and put furniture on it?). Even the exploration of pure silence has begotten its own groundbreaking work.
Purity in Software Engineering
Taking a purist approach to a software engineering matter allows you to nail your banner to the church door for all to see: I believe in X, therefore X is always correct; by applying X to every situation, I prove the superiority of X and validate my initial conclusion that X is superior. This comes up a lot, particularly in architectural discussions:
- Asynchronous Messaging is great! Let's use it for everything!
- An RDBMS is great! Let's use it for everything!
- REST is great! Let's use it for everything!
- Google Protocol Buffers is great! Let's use it for everything!
- Cubes are great! Let's use them for everything!
- Lisp is great! Let's use it for everything!
- XML is crap! Let's banish it from the world!
- RPC is crap! Let's banish it from the world!
- Solaris is crap! Let's banish it from the world!
- RDBMSes are all crap! Let's banish them from the world!
- TCL is crap! Let's banish it from the world!
- Lisp is crap! Let's banish it from the world!
And that's why I view these positions as intellectual cowardice: by intentionally limiting your choices, by limiting the scope of tools at your disposal, by closing your mind off, you reduce the amount of thought you have to put in. But as engineers, as craftsmen, we live by our minds: we are paid to think, and the more we're paid, in general, the more thinking we're expected to do.
TI could replace himself with someone far less experienced by simply writing his own DSL on the JVM, one that doesn't allow you to do any of the things he thinks are wrong and forces you to do all the things he thinks are right. It wouldn't be that hard. Then he can simply hand that off to someone and say "here you are; a language that obeys every one of my edicts perfectly; you are sure to do an excellent job now that I've handcuffed you to my beliefs."
[Aside: if TI's beliefs really were so universal as to allow him to view me with revulsion for not sharing them, why isn't there already a TI programming language? Are you telling me that there isn't a programming language targeting some environment (JVM, CLR, raw x86 machine code) that forces the practices he likes and forbids the practices he doesn't? Really? Does that tell you something, perhaps? Because it's not like we have a shortage of programming languages these days. And it's not like it's particularly hard to write a language that targets one or more existing VM environments, so all the hard work is taken care of already.]
In Defence Of Pragmatism
I'm impure. I'm about as tainted as you can get. And you know what? I think that makes me more effective, rather than less. Because what I get out of a lack of purity is pragmatism. The two can't coexist particularly effectively: I think you fundamentally agree with one or the other.
Pragmatism allows me to look at a problem, carefully consider the advantages and disadvantages to each potential solution, and then determine the right approach to each particular situation.
Pragmatism allows me the flexibility to do things that I know in other circumstances would be wrong, but in that particular one would be right.
Pragmatism allows me to have a bag of tricks rather than one, and pull them out as I see fit.
Pragmatism allows me to gradually refine my beliefs by using both my favored and unfavored approaches in a variety of circumstances so that I have evidence and backup behind me when I express my opinion.
Pragmatism allows me to work with a variety of code written by a variety of people and engage with it constructively as it is, rather than seeking to rewrite it before I'd be willing to touch it.
Pragmatism allows me to decide where to focus development efforts based on what's most important that day: sometimes rushing a feature into production to make more money, sometimes spending time testing the bejebus out of a small change before I make it.
Pragmatism forces me to build software which is easy to maintain and consistent in its internal architecture and clean in its modularization and consummately tested, because that's the only way to maintain a complex system over time. Pragmatism also tells me that if I know the piece of development is only there to run a single time, focusing on all of that is wasted time better spent elsewhere.
Pragmatism allows me to assess the strengths and weaknesses of my team, and the constraints of my customers, before assessing the correct development practices that will result in that unique group of people producing the best software possible with the fewest resources in the shortest possible time. Pragmatism forces me to understand that no one methodology could possibly work for all engineers, all customers, and all projects.
Pragmatism allows me to focus on one and one thing only: getting the job done well. Do what it takes, but get it done, and make sure it works properly. And that's precisely what we're here to do as software engineers: engineer software. And engineering, unlike art, design, music, or any purely creative endeavor, requires getting something done that works. We're not in this business to craft a purist expression of an idea. We're here to build things that work.
The irony is that many purists get that way out of a misguided belief that they're being pragmatic: by choosing a single technology or technique or methodology, they've chosen the One True Solution and don't have to consider the same decision over and over. Furthermore, they've made sure that the people working for/with them (particularly the less skilled ones) don't do The Wrong Thing. But that means they've closed their minds off to the chance that they've chosen incorrectly. Or, more to the point, to the chance that there is no one right decision.
That's why purity and pragmatism can't coexist: purity requires that you ignore the results in favor of an ideological ideal; pragmatism is all about pursuing results for their own sake.
And that's why I think TI and I couldn't work together: I'm a pragmatist and he's a purist. The two of us were bound to clash, and at least we discovered it quickly.
But in the end, I know my approach is right. I'm proud to be a pragmatist.
Except when it comes to TCL. Man, I wish I could banish it from the world.....
Saturday, December 13, 2008
Meaningless, Programming, and Innate Affinity
Clay Shirky has a guest post on BoingBoing called "Comfort with meaninglessness the key to good programmers". The comments are filled with "my programming language r0x0rZ" and suchlike, but it's another perspective on the often-espoused (not least by yours truly) theory that being an excellent programmer is an innate characteristic. You can be trained to be a software engineer of some skill (and large firms are stocked with hundreds of the type, who are trained to implement business logic in a programming language, or in pixel-pushing GUIs), but there appears to be something innate in what makes a programmer.
Or, as I was recently told, "so what you're saying is that all of you are naturally freaks."
Friday, December 12, 2008
Real Time Linux Kernel Smacktown Comments
I read El Reg's writeup of Red Hat and Novell attempting to out-gun each other on their RT-esque kernel patch versions of their main Linux distros. Just a few quick comments.
First of all, they're testing against RMDS 6 (something I have some experience with). When you see anything about an RMDS 6 P2PS, what you're really looking at is an ancient branch of RVD running the TCP form of the Rendezvous protocol. That means that a P2PS essentially acts as a middleware broker running a bizarre mixture: a protocol designed for unreliable broadcast/multicast usage (Rendezvous), but running over TCP on a single machine. It's quite a strange beast, to be fair. Just a bit of background detail for you there.
Secondly, where things get really strange is when Infiniband comes in on the Novell SLERT side. Uhm, huh? They're combining RMDS with IB? In what form? Are they doing IP over IB? Uhm, why? I'm sorry, but if you're going to roll out IB without using something IB-specific like OpenFabrics that lets you leverage all that nice IB RDMA action, there's something seriously wrong. Yes, you might shave a few microseconds off your latency compared to IP over Ethernet, but you'd do much better by coding raw against IB. Based on that, I'm actually surprised that Novell included this at all, since I really doubt that anybody is going to try to improve the performance of their RMDS infrastructure by running it over IB.
So all that being said, this type of benchmark is really just a proxy for testing how fast the kernel patches can get networking packets off the wire and into user space for processing. That's a pretty interesting data point and good to know, but if you really care about that and don't have to run RMDS for everything in the first place, why not use something like Solace, which doesn't have a user space in the first place? Why not just code against IB using OpenFabrics? This is far more interesting to me on the client side (where you're going to have user-space code ultimately doing your processing) than on the broker (e.g. P2PS) side.
So that being said, I think a very interesting benchmark would be:
- Similar workload in general (the STAC workloads actually are representative of the general workload you get in financial market data processing);
- Hardware distribution to the clients (Tervela, Solace, or OpenFabrics/IB);
- Clients different between the RT-patch version of the kernel and stock kernel.
The client is where I think the action needs to be here, not the broker.