Sunday, March 29, 2009

From The Economist Entrepreneurship Special Report

In the March 14 2009 Economist Special Report on Entrepreneurship, one choice snippet to cut out (emphasis added by yours truly):

The European venture-capital industry, too, is less developed than the American one (significantly, in many countries it is called "risk" capital rather than "venture" capital). In 2005, for example, European venture capitalists invested EUR12.7 billion in Europe whereas American venture capitalists invested EUR17.4 billion in America. America has at least 50 times as many "angel" investors as Europe, thanks to the taxman's greater forbearance.

Yet for all its structural and cultural problems, Europe has started to change, not least because America's venture capitalists have recently started to export their model. In the 1990s Silicon Valley's moneybags believed that they should invest "no further than 20 miles from their offices", but lately the Valley's finest have been establishing offices in Asia and Europe. This is partly because they recognise [sic.] that technological breakthroughs are being made in many more places, but partly also because they believe that applying American methods to new economies can start a torrent of entrepreneurial creativity.

I've been writing about this exact same thing, and I think it's becoming clear to the general VC community: investing money wisely leads to cultural changes that allow you more opportunities for investing in the future. Much as a Valley GP may invest in an untried entrepreneur, expecting that while this one may not pop, the next one will, one can view an entire market the same way.

The rest of the special report is excellent as well. You should read it; it will make you smarter.

Wednesday, March 25, 2009

Recruiter Tip #1: Don't Block Your Phone Number

(Part of a series I'm starting on how to be a better technical recruiter).

One of the things that's most annoying about dealing with technical recruiters is that they all (well, almost all) block their phone number when they call you. I'm not entirely sure of the reason for this, except that maybe they think that if you know it's a recruiter, you won't pick up. The problem is that at this point, the only people who ever call me without Caller ID enabled are technical recruiters, so a Blocked number automatically triggers "this is a recruiter" in my mind.

If you want to stand out as a candidate-friendly recruiter, don't block your number. More importantly, have the caller ID you present be your direct line, not the main line for your company.

Here's why you want to enable caller ID:

  • If I'm actively looking for a job, I may field 5-10 calls from recruiters a day. Some of them I want to talk to, some of them I don't have time for (if I'm rushing to an interview, I want to take a call from the recruiter that put me up for that interview in case it's a change of plans; otherwise, I don't have time for you). Help me figure out whether I want/need to take your call when you make it.
  • When you call me with Caller ID on, I can take your phone number straight from my phone and call you back. If I'm out and about, I don't have paper and pen handy, so just telling me your number in a voice mail is largely useless, since I can't take it down.
  • Many recruiters don't leave messages. I want to know which recruiters are being annoying and calling me 5 times a day without leaving a message, so that I can put them on my bad list. Conversely, if I get 5 blocked calls and one of them is you and you're thoughtful enough to leave a voice mail, you get kudos.
  • I have Visual Voicemail on my iPhone. That doesn't help me replay your message when every single pending voice mail is from Blocked.
  • If I can't pick up the phone because I'm in a meeting or something, it makes it very easy for me to see it's you and email you back saying "hey, was this urgent or did you just want to check up on me?" Remember, in finance we have to plan on privacy in advance.
  • If I'm a hiring manager, I expect the vast majority of recruiters calling me are cold calling me trying to find out if they can recruit for me. I want to pick up the phone for the recruiters that I'm actively on a search with, while ignoring the ones I'm not. If you're working for me, I want to know that so that I can pick up the call while sending the others to voice mail. So if you have an agreement with my firm, you have no reason not to announce your presence before I pick up.
  • Rather than a main line, I want to be able to call you back directly, particularly if I'm dealing with different recruiters from the same firm, one of whom I want to talk to and one of whom I don't.

If your IT staff tell you that they can't unblock your number or can't set it up so that your outbound number is your direct line, they're lying.

You have nothing to lose here if you're behaving ethically. So stand up with other ethical businesses and announce your caller ID.

Tuesday, March 24, 2009

Best Effort Delivery Fails At The Worst Time

In an earlier post on this blog, I discussed the two fundamental types of MOM infrastructure: broker-based and distributed. A particular thing I wanted to address is the best-effort delivery model inherent in distributed MOM infrastructure.

Background on Best Effort Systems

As a reminder, distributed MOM systems operate as a series of broker instances, each located in the same process, VM, or OS as the code participating in the message network. For example, it might be a library that's bound into your process space (like 29West), or it might be a daemon hosted on the same machine (like Rendezvous). These distributed components then communicate over a distributed, unreliable communications channel (such as Ethernet broadcast or IP multicast). [1]

Conventionally, publishers in these systems have no knowledge of the actual consumers, and thus no access control is possible (how can you enforce AuthN or AuthZ if you're just bit-blasting onto a wire?). They simply fire-and-forget in the most literal form, as they're sending over a network protocol that has no concept of acknowledgment. They issue bits onto the wire and hope the bits reach any recipients. Managing these systems means setting up your networking infrastructure to handle IP multicast properly, or making sure you have sufficient Ethernet subnet-to-subnet bridges (rvrds if you're working with Rendezvous) so that publishers and consumers can communicate.
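
To make "bit-blasting onto a wire" concrete, here's a minimal sketch in Java of what literal fire-and-forget publishing amounts to. The group address, port, and payload format are made up for illustration; real systems like Rendezvous or 29West layer far more framing, batching, and subject management on top of this.

```java
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;

public final class FireAndForgetPublisher {
    public static void main(String[] args) throws Exception {
        // Group address, port, and payload are made up for illustration.
        InetAddress group = InetAddress.getByName("239.1.2.3");
        int port = 45000;

        MulticastSocket socket = new MulticastSocket();
        try {
            byte[] payload = "PRICE.VOD.L|142.55".getBytes("UTF-8");
            // Blast the bits onto the wire. No ACK, no receipt, no idea
            // whether zero, one, or a thousand consumers ever saw them.
            socket.send(new DatagramPacket(payload, payload.length, group, port));
        } finally {
            socket.close();
        }
    }
}
```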

The only guarantee you have is one of "best effort." What this means is that publishers and consumers will do everything reasonable to ensure that messages get from publisher to consumer, without sacrificing speed or adding unnecessary latency to the network. The moment you have a slow consumer, a network glitch, or a publisher going down, that's it; messages get lost or thrown away, and you don't know about it (most of the time; some systems have an optional "you just lost messages" signal you can receive, but in the worst case your client can't tell that it's lost anything at all). That's because in these scenarios, anything you do to strengthen the delivery semantics will cost you in performance. If you don't need it, why pay for it?

This is very different from guaranteed-delivery semantics, where the middleware is responsible for throttling and caching such that messages aren't lost unless something really drastic happens; simply throwing away a message because an internal queue was starting to fill up isn't something you expect from those systems.
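
For contrast, here's a sketch of what the guaranteed-delivery side looks like in plain JMS. This is vendor-neutral JMS 1.1, except that how you obtain the ConnectionFactory and the queue name are assumptions for illustration:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

public final class GuaranteedPublisher {
    public static void send(ConnectionFactory factory, String text) throws Exception {
        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("TICKS.IN"); // assumed queue name
            MessageProducer producer = session.createProducer(queue);
            // PERSISTENT means the broker writes the message down before
            // taking responsibility; a slow consumer causes queueing, not loss.
            producer.setDeliveryMode(DeliveryMode.PERSISTENT);
            TextMessage message = session.createTextMessage(text);
            producer.send(message); // blocks until the broker has the message
        } finally {
            connection.close();
        }
    }
}
```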

And what you get out of best effort is a lot of raw speed. No TCP stacks getting in the way, no sliding windows or ACK messages to deal with, no central routing brokers. Fast, low-latency messages. Sure, they're unreliable, but do some interesting stuff with the protocol and consumers who miss messages can even request the ones they missed from the publisher (so best-effort can be pretty-darn-good-indeed effort).
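
That "request the ones you missed" trick boils down to sequence numbers and NAKs. Here's a toy sketch of the consumer side; requestRetransmit and deliver are hypothetical hooks, and real reliability layers (PGM, Rendezvous's) are considerably more involved:

```java
public final class GapDetectingConsumer {
    private long nextExpected = 0;

    public void onMessage(long sequence, byte[] body) {
        if (sequence > nextExpected) {
            // We missed [nextExpected, sequence - 1]; ask the publisher to
            // replay them. This only works while the publisher still has
            // them buffered, and only if the publisher is still up.
            requestRetransmit(nextExpected, sequence - 1);
        }
        if (sequence >= nextExpected) {
            nextExpected = sequence + 1;
            deliver(body);
        }
        // sequence < nextExpected: duplicate or late retransmit. A real
        // implementation would buffer and reorder; this toy version only
        // illustrates gap detection.
    }

    private void requestRetransmit(long from, long to) { /* hypothetical NAK send */ }
    private void deliver(byte[] body) { /* hand off to the application */ }
}
```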

This is seductive, and it's particularly seductive to developers. Setting up a simple Rendezvous (or other low-latency, best-effort) network is something you can do without your IT staff's involvement. There's no discussion of ports and broker hardware and networking topologies and involvement with your company's MOM Ops Team or anything else. Just start banging messages around, and look! It all works! And that's when things get problematic.

Best Effort = Development Win, Production Fail

The problem is that best-effort systems seem much better than that in development. In development, you can run for days without dropping a single message, no matter what size it is. In testing, you can run many test iterations without ever being able to force a message drop [2]. More importantly, you really can't force a message drop, at least not without implementing a lossy Decorator on top of the MOM interfaces. And that's a problem.
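
If you do want to force drops in testing, the lossy Decorator amounts to a dozen lines. A sketch, where the Publisher interface is a stand-in for whatever your MOM API actually looks like:

```java
import java.util.Random;

interface Publisher {
    void publish(String subject, byte[] body);
}

final class LossyPublisher implements Publisher {
    private final Publisher delegate;
    private final double dropProbability;
    private final Random random = new Random();

    LossyPublisher(Publisher delegate, double dropProbability) {
        this.delegate = delegate;
        this.dropProbability = dropProbability;
    }

    public void publish(String subject, byte[] body) {
        // Silently discard a fraction of messages, exactly as the wire would.
        if (random.nextDouble() < dropProbability) {
            return;
        }
        delegate.publish(subject, body);
    }
}
```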

Developers easily self-justify both of these situations:

  • Yeah, so 25% of that message handling code isn't covered by my functional tests. Who cares? Like RuntimeException handling, it's going to be so rare that there's no point forcing the test. If it were going to be more common, the functional and integration tests would already expose it.
  • Why would I spend 40% of my time working on 25% of the code that I can't get to naturally execute? That's completely retarded. I need to get back to making my Traders happy by adding more functionality to the system and sticking more data into Excel. Get back to me when the Big Bad Error Case actually happens.
And why wouldn't you accept those scenarios? If you don't, you're stealing from the company through bad prioritization.

The problem is that development infrastructure in no way represents the production user's infrastructure, particularly in a Financial Services context. And the causes of message drop tend to only happen under serious load, which usually means significant market activity, which usually means it's the absolute worst time to drop a message and have to engage in some form of retry behavior that's thoroughly untested in anger.

Consider these differences:

  • In development, you're the only app running that is using the specified MOM system (because like most developers, you work on one system at a time). In production, your users probably have three different apps all stressing the underlying networking infrastructure for the same MOM system.
  • In development, your networking system is probably under minimal load; aside from accessing your SCM system and streaming MP3s to your colleagues and reading tech blogs and tweeting [3], you're not really taxing it; your production users are pushing their networking hardware as hard as they can (mostly loading and saving 100MB Excel files to network stores, but that still eats bandwidth).
  • In development, you typically have single use cases: either big messages or small ones, but not mixed; your production users have them mixed all around.
  • Production users and systems have a lot more running on them (if traders, they'll have Bloomberg and Reuters and 8 Excel processes and 2 trading systems and your apps all on the same machine). Under peaks, everything suffers simultaneously as resources that otherwise are plentiful become quite scarce.
  • In development, if your machine crashes, you have a lot more to worry about than whether that test case ran successfully; in production, machines crash and die all the time and the only thing your users care about is getting themselves up and running as fast as possible. [4]
  • For intra-data-center communications, your development machines are used for coding and compiling and testing; your production machines are churning through messages 24x7 and usually sit in a much noisier set of switches. That has effects that you can't evaluate, because you're not allowed to play on the production hardware.
As a result, expect at least 500% more packets to be dropped in production than in development. [5] But if you've been properly prioritizing your development, you've never tested this situation. So you're hosed.

Effective Use of Best Effort Systems

So how do you effectively use distributed/low-latency/best-effort MOM systems?
  • Always make sure your application has a non-best-effort delivery mechanism. Most of the problems with best-effort systems only bite if you insist that your application have exactly one MOM system when you know that only parts of it are suited to a distributed MOM infrastructure. If you are ever using a best-effort MOM system, start with a guaranteed-delivery MOM system and downgrade certain traffic accordingly. Never start in the other direction.
  • Don't ever use a best-effort system for a human-interacting environment (though the back-end processors for a human front-end system are fine). Seriously. You don't need it (humans can't detect latency below 100ms anyway), and you're just going to thrash their PC. Don't do it.
  • Make sure messages are as idempotent as possible. While tick delivery systems based on deltas are extremely useful, try to make the messages interpretable even when individual messages are lost. For example (there's a sketch of this after the list):
    • Base all delta ticks on deltas from start of day, not from last tick (because you might lose the previous tick)
    • When shipping only those fields which have changed, also ship the most important other fields (so if shipping just a change to the Ask Size, also ship the basic Bid/Ask/Trade sextuplet of data)[6]
  • Every time you publish or subscribe using a best-effort system, ask yourself "What happens if this message disappears completely?" If the answer is not "whatever," don't use that system and/or upgrade your semantics.
  • Any time you are using these systems, have a dedicated network completely shut off from other networking uses. If you have multiple major use cases (think: equities and fixed income, or tick delivery and analytics bundles) on the same machine, use completely different hardware for those as well. Combining traffic on the same network interface is the biggest cause of message loss that I've seen. If your networking team can't/won't do this, get a new networking team.
  • Have those network interfaces (one per message context) hooked up to the same physical switching fabric wherever possible to minimize the amount of potential loss in inter-switch communications.
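
Here's the sketch promised above of what a loss-tolerant tick might look like; the field names are mine, not any particular feed's:

```java
// Each tick is self-contained: deltas are against the start-of-day
// reference, not the previous tick, and the core fields ride along even
// when only one of them changed. Losing any individual tick leaves the
// consumer with stale data, not wrong data; the next tick that does
// arrive fully repairs the state.
final class Tick {
    final String symbol;
    final long sequence;          // for gap detection, not reconstruction
    final double bidDeltaFromOpen;
    final double askDeltaFromOpen;
    // The basic sextuplet, shipped on every tick:
    final double bid;
    final double ask;
    final double last;
    final long bidSize;
    final long askSize;
    final long lastSize;

    Tick(String symbol, long sequence,
         double bidDeltaFromOpen, double askDeltaFromOpen,
         double bid, double ask, double last,
         long bidSize, long askSize, long lastSize) {
        this.symbol = symbol;
        this.sequence = sequence;
        this.bidDeltaFromOpen = bidDeltaFromOpen;
        this.askDeltaFromOpen = askDeltaFromOpen;
        this.bid = bid;
        this.ask = ask;
        this.last = last;
        this.bidSize = bidSize;
        this.askSize = askSize;
        this.lastSize = lastSize;
    }
}
```

The price of this is bigger messages, which is exactly the speed-versus-semantics trade the rest of this post is about.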

These guidelines essentially amount to the following:

If you think you want to use a distributed MOM system, you probably don't. If you know you do, only use it for server-to-server communications, and make your physical layout match your logical layout as much as possible.

Conclusion

This essay will probably seem pretty down on best-effort systems; that's intentional. I think these systems still have their place, but for most systems that place is back in history, when switches were unnecessary latency additions, broker-based MOM systems added yet more latency, and hardware-based TCPoEs didn't exist. If putting a broker between publisher and consumer cost you 500ms of latency, you'd be stupid not to avoid it. But that's no longer the case. Modern brokers can measure their additional latency in microseconds while delivering far better than broadcast/multicast-based best effort; if you can do that well with a broker in the middle, why sacrifice application semantics to remove it?

That's why I think the future in this space lies with systems like 29West rather than systems like Rendezvous: 29West is adding interesting functionality that you want, and better (think: non-publisher-host-based) resilience, while still keeping a non-broker-based infrastructure. And while I still think 29West is fringe, Rendezvous is simply stalled as a product, and it's not going anywhere. [7]

So the next time you're sitting there coding against Rendezvous and thinking "you know, let's face it, this best-effort thing is a sham; all my messages show up!" just remember: the moment you assume that, it's going to bite you in the ass. And it'll probably happen on the last day of the quarter when volatility is up 25% and your head trader is having a down day. And then you'll wish you had sacrificed those 10us of latency for better-than-best-effort delivery.

Footnotes:

[1]: In JMS terms, imagine an embedded JMS driver which communicates without a broker, just through IP broadcast and multicast with every other JMS driver on the network.
[2]: Even worse, because many distributed tests will launch all components on one machine, you never exercise the network effects that distributed MOM systems entail. This makes sense from a development efficiency perspective, but is particularly problematic in a distributed MOM situation, because the chances of dropping a message fall to pretty much 0 in such a scenario.
[3]: Assuming you don't work for Big Bank B, where all such forms of non-authorized-communications are Strictly Prohibited.
[4]: This is a particular worry if your Best Effort to Better Than Best Effort upgrade is based on disk or other publisher-side mechanisms: what happens when the publisher's machine goes down completely? Or, what happens if it's mostly down, but the fastest way for your ops team to resolve the situation is to swap it out before it fully fails?
[5]: No, I'm not going to specify this more fully, because it's so prone to variation based on the runtime environment. Just trying to say that if you drop one message a day in dev, assume at least 5 per endpoint per day. Or more.
[6]: Having periodic snapshot ticks (so adding the sextuplet to the tick that happens every 5 seconds, or manually pumping a snapshot every 5 seconds under low activity) is one alternative here, but what happens if you lose the snapshot tick?
[7]: Except, bizarrely, into the realm of broker-based MOM by virtue of the Solace-powered Tibco Messaging Appliance.

Thursday, March 19, 2009

AMQP: Community, Patents, Openness

(Background: My initial rant on the Red Hat patent situation; my rant on the Red Hat press release).

There's been a lot of public discussion recently regarding AMQP and patents, specifically USPTO 20090063418. For the record, I will be formally objecting to the USPTO about this particular patent; regardless of who filed it or when or on what, it appears to fail the Obviousness test and I have been provided with quite relevant Prior Art that indicates that the patent should not be granted. This has nothing to do with the authors of the patent, but with the fact that I don't believe the patent is worthy of grant by the USPTO.

And for the record, I personally do not believe that software should be patentable, or that building up "defensive" patents is a good corporate response to the current insanity. However, I also understand that this is a decision made at board level, and no single employee below board level is responsible for their firm's patent policy, whether it be Microsoft, Red Hat, IBM, Cisco, or Some Random Startup.

Do I think that Red Hat behaved in a manner that I would deem admirable given their action in patenting an extension to an open standard? No. Do I think it's the fault of employees of Red Hat who are quite clearly feeling attacked by me? No. Just as these employees had no control over Red Hat's position on patents, they also had no control over whether they were allowed to speak to any particular person at any given time. Their lawyers were in control, and part of being an employee means following board-level rules. I don't know why their lawyers said that they couldn't disclose anything about the patent or issue particular statements at any given time, and neither do you (unless you're one of them).

Yesterday's AMQP PMC meeting was the first time that the PMC members had a chance to talk since the patent was unveiled and all the bad blood was unleashed. Here's my current take on the situation, and constructive suggestions to move forward.[1]

What Is The Specification?
First of all, a lot of the confusion and requests for clarification come down to the precise nature of the patent grants that are part of joining the AMQP Working Group. Here's the problem: they're not public. More importantly, the definition of the term "the Specification," which appears to cause the biggest problems, isn't public. I searched the AMQP web site, and they're not there. If I'm wrong, please let me know, but I sure as heck don't see them anywhere.

That makes it very difficult for anyone on the outside to verify any of the claims that people are making. I can't imagine that this is all that commercially sensitive; heck, the W3C license grants are all public. Putting these terms into the public domain would solve a lot of problems here.

You Can't Steal People's Patents
Someone commented to me, "well, why not just expand the spec every time a patent comes out?" This triggers precisely the "proprietary vendor fear" that the existing patent does. Let's look at the hardware vendors.

People like Tervela and Cisco, who are members of the AMQP Working Group, and Solace, who I've urged to become a member, have patents. And unlike the AMQP+XQuery patent, some of theirs actually are things that I think would be worthy of patenting. Having a policy that says "If you join the AMQP Working Group, we can render any of your patents moot by simply saying 'what your patent covers is now an Optional Extension to the AMQP Specification'" is just as much a kiss of death as saying "here's a patent that we filed that means if you add AMQP to your proprietary software product we'll sue you."

This matters because, as I've said repeatedly, if AMQP does not end up encompassing the existing proprietary MOM vendors, it's not going to win.

Disclose What You Can't Disclose
The Red Hat press release, in particular, was taken by many people (some of whom had nothing to do with AMQP and were neutral third parties) as unnecessarily hostile, mostly because of what wasn't said. And I know that Red Hat employees have been completely absent from all public communications (excluding anonymous ones). That will have been carefully orchestrated by the lawyers. It's a matter of board-level policy.

Here's what the board of any FOSS vendor who's dealing with patents needs to do: lay out the ground rules for what their employees can say and do, and tell the rest of us. Unless you do that, when you don't make a comment about a particular area, the rest of the world doesn't know why. Is it because there is malicious intent? Is it because your lawyers won't let you? Is it because you didn't deem it necessary? We don't know. We want to know, and I can't see a single rational basis for saying "we cannot tell you what we can and cannot talk about, even in the most generic terms." Meet us half-way.

Open Standards Are Exactly That
People participating in this process should expect that this is an Open Standard, and Open Standards are messy. They're delightfully, angrily, bloggingly, twitteringly messy. You think this little incident was unacceptably messy? You been following the closure debates in Java? What about HTML 5 or XHTML 1? ODF/OOXML?

My point here is that open standards are, by their very nature, subject to a large group of people piping up from the sidelines. They won't always say what you want, and they won't always behave the way you want. And that's still valuable, because they add content. This probably just happens to be the first time that I know of that AMQP has been prone to a big public fight. Get used to it.

This is actually a Good Thing. How many people do you think actually know what AMQP is now versus a week ago? How many people do you think are following what's happening in the community now versus a week ago? Public fights are signs that a community is healthy. Otherwise you end up with an ANSI Committee of the Good And Worthy And Irrelevant, and nobody listens to them anymore (I have my "pet committees" there, and not all of them are like this, but we all know some like these). They're closed, they deliberate in private for years, and only then do they hand something down from on high that the industry ignores because it's moved on in its absence. Debate, unruly as it is, means you're doing something right.

A Way Forward
I'd like to be so bold as to propose a constructive suggestion on a way to move forward. These are entirely my ideas and I've not discussed them with anybody yet.[2]

  1. Publish key licensing definitions. I know that some components of membership may be legally or commercially sensitive. However, if we (as users and developers) are going to trust that everything is 100% on the level, we have to know the key terms under which patents are handled. In particular, we need to know core definitions for things like "the Specification."
    While many of these terms may seem self-evident to someone in the PMC, they aren't to those of us who aren't in it. And that information disparity can lead to a lot of argument as people reason from different levels of knowledge. Level the playing field, even if only a little bit.
  2. Determine an optional but interoperable feature policy. This should be for things that are not part of the required specification (and for the record, I strongly favor a minimal specification), where you still want interoperable implementations of common scenarios. This is where the Red Hat AMQP+XQuery patent would lie, because if I want to do XML-based CBR, I most certainly don't want to have 8 versions that are all just a tiny bit different to avoid each other's patents. Same with geospatial routing, dynamic multi-hop routing, whatever.
  3. Assemble a clearing house for prior art. There's a lot of prior art in messaging that can hopefully develop into a massive database that can be used for the community to collectively fight against patents from non-members. Let's get this going. I don't know how to do this, but I'm sure there's a way.
  4. Communicate more openly. Usually in a situation like this from another spec or working group, people from the spec committee would be talking constantly with the community in a constructive fashion. That didn't happen here. There was nothing public from the PMC for days while the process deteriorated until members of the PMC started making very inflammatory statements in public.
    As I mentioned earlier, you have to expect stuff like this is going to happen, and at least one or two people have to be empowered to talk openly and early, and need to do so. AMQP is turning the corner from something that was in a vendor niche to something that people are starting to Really Care About, and that means that you have to start dealing with people who aren't on the PMC on a regular basis, whether you like it or not.
    And some of those will be differences of opinions between PMC members which will spill out into the public domain. This is normal and healthy, and as long as people can remain civil and constructive, there's no reason why all communications should be limited to PMC-only. That's not constructive, because you will never effectively present a public face, so you have to allow for some public discussions when the PMC doesn't have a standard, quickly available response.

I want AMQP to work. I want the spec to be adopted. I want asynchronous MOM-based communications to flourish. I think we can move past this and AMQP will be stronger as a result.

Footnotes:
[1]: This is all from purely publicly available information.
[2]: For the first and last time, I am not a motherfarking sock puppet. If you think there's ever been anyone who can control what I say, you're sorely mistaken. Employers (e.g. Big Bank B) can tell me to not talk at all, but I'm not the type of person who will just repeat a party line with a straight face if I don't agree.

Wednesday, March 18, 2009

AMQP: Why Do I Care?

UPDATE 2009-08-18: I've been told that I signed an AMQP Reviewer Agreement, which is a different document from full licensees. I've removed the footnote as it turns out it was completely incorrect.

Aside from the fact that this blog is called "Kirk's Rants" for a reason, I get especially strident when I start talking about AMQP. So much so that in addition to a comment on one of my recent Red Hat posts, I've been asked by an online magazine (as part of an interview process) why in the world I care so much. It's simple: I care because I've been bitten in the past.

This blog's history with AMQP goes way back. In fact, it's my #2 post label (second only to the more generic Messaging). Let's take a trip down memory lane, shall we?

My first ever post on the subject was a lamentation. I had been having problems with the C++ client libraries for SonicMQ (an otherwise fantastic product, I must say). I grabbed the JMS client for qPid Java, and RabbitMQ, and they just didn't work together. Surely, though, interoperating was the whole point of the AMQP exercise? And I've been involved ever since.

I've read drafts of specifications. I've signed a Reviewer Agreement with the AMQP Working Group (though I'm not a member of it). I've met John O'Hara and Pieter Hintjens and Alexis Richardson and Carl Trieloff and gotten about as involved as you can be and not work for an MOM producer. I've gotten involved.

But why? Because it matters.

I've worked in anger with:

In addition, I've done formal evaluations on:

Every single one of them has bitten me in the ass before, usually around client libraries. For that reason, most applications are written against one broker technology, whether it's the appropriate one for all use cases in the application or not. Or, a whole firm's MOM infrastructure will be written against one broker technology, to limit all the problems that come in when applications have to support multiple client libraries, or have to select them for each application.

I view this as something that holds back MOM and keeps it in a Financial Services Ghetto. I don't view it as odd that the "Messaging Is Not Just For Investment Banks" presentation at qCon attracted more people than the AMQP presentation. Getting your head around what you can do with fully asynchronous fire-and-forget APIs is hard enough; getting your head around the fact that, if you're not adhering perfectly to the JMS spec [2], you have to learn and work with completely different client APIs, half of which are broken at any given time, is just too much. It's why so many people just jump into using XMPP: it may not have the best semantics, but at least it interoperates. This needs to end.

For MOM to reach completely widespread adoption, I need to be able to tie my clients to my programming language and operating environment, not to what the provider of my broker technology is willing and able to support. To make that a viable proposition, I need to consolidate to one protocol. And to get enough engineering work going into the client libraries to make them really, really good, I need to have very few of them; ideally, one.

Imagine if the web didn't have HTTP. Imagine if instead of having one protocol that would talk to everything I had to have separate client libraries for Apache, and Netscape Web Server, and IIS, and Zeus, and anything else that I wanted to support. Imagine, further, that Apache is 2% of the market. That's the situation we're in with messaging at the moment. To break out of that situation, we need the HTTP of messaging (only with less text and more binary, natch). That's AMQP.

Again, remember the Apache-being-2%-of-the-market scenario. In order for AMQP to be a success, it has to be embraced by the 98% of the market using proprietary solutions, so that applications which rely on those solutions can be converted. Once those applications are converted to AMQP, users (not developers) can try different brokers and see if there are perhaps ones that suit their applications better. Once that happens, we'll have proper innovation in the MOM space. I want to live to see that day.

I've spent a lot of time with Alexis Richardson and the rest of the RabbitMQ team. Given that they're in London, I'm in London, I personally get along with them down t'pub, and I spent a fair amount of time hanging out while looking for a Cash Money Gig, this shouldn't be surprising. Furthermore, the fact that I originally discovered AMQP through Steve Vinoski discussing that Erlang is ideally suited to Message Oriented Middleware implementation and thus RabbitMQ was a really good use of Erlang means of course I'm going to be interested in hanging out with them. However, let me be very clear about something: I have never received a single pound from any provider of message oriented middleware solutions [3]. Since starting this blog ferreals I've received money from exactly two sources: Derivatives Company A, and Big Bank B.

In sum, I care about this stuff because proprietary MOM clients have bitten me in the ass way too many times, and because fundamentally I hate closed protocols. I'm a practitioner in this space, who never (again) wants to:

  • deal with another application that can send out messages to Broker Technology A, where I'm using (or want to use) Broker Technology B;
  • be stopped from using Interesting Technology C because the Broker Technology I have to connect to doesn't support it;
  • be nearly into production with a system, only to find out that the client library has a hideous bug that I can't fix because it's a proprietary protocol.
In short, I care because I'm experienced enough in the space that I've learned to care.

Every practitioner should care so much about this type of thing. If you don't care anymore about the fundamental constraints of the space in which you develop, why are you still in the game?

Footnotes:
[1]: REMOVED (see update at top of post).
[2]: Not using Java? So sorry, there's nothing equivalent in any other language, and each vendor decides how to map JMS onto C++ in a completely different way.
[3]: Okay, I let Hans Jesperson from Solace buy me a hot chocolate. Guilty!

Tuesday, March 17, 2009

Red Hat's AMQP+XML Patent Press Release

Red Hat has finally come out with a statement on the AMQP patent application (USPTO 20090063418). I have written about this already, prior to the press release coming out.

First of all, the people who are the most rabidly angry about this are other AMQP leaders (leaders of other projects, members of the AMQP Working Group). This isn't a case where (to the best of my knowledge) Tibco is coming out and saying "don't trust those crazy AMQP people, look, they've got patent issues too!" Rather, this is a bunch of people who feel quite betrayed by someone they wanted to view as a positive member of the community who shared similar desires and goals to the rest of the community. FUD doesn't apply here, as those of us with legitimate concerns are hardly going to be spreading FUD about the thing (AMQP) that we are really pushing for.

Next, the Patent Promise and Estoppel issues notwithstanding, the AMQP working group is a mixture of commercial and open source vendors. Red Hat's Patent Promise only applies to Open Source vendors. Given that AMQP, in order to be a success at all, must attract support from commercial vendors who have no safe harbour under the Red Hat Patent Policy, this patent has the ability to poison the entire community. This is a key part of my original argument, and there's been no response from Red Hat on this.

Finally, let's take one paragraph which is, quite frankly, fictitious:

Although there have been some recent questions about one of our patent applications relating to the AMQP specification, they appear to originate in an attempt to spread FUD. There’s no reasonable, objective basis for controversy. In fact, the AMQP Agreement, which we helped to draft, expressly requires that members of the working group, including Red Hat, provide an irrevocable patent license to those that use or implement AMQP. In other words, even if we were to abandon our deeply held principles (which we will not), the AMQP Agreement prevents us from suing anyone for infringing a patent based on an implementation of AMQP specification. Moreover, our Patent Promise applies to every patent we obtain. Red Hat’s patent portfolio will never be used against AMQP, and Red Hat will support any modifications to the specification needed in future to verify that fact.

The factual inaccuracy here comes from the fact that the AMQP Agreement covers an irrevocable patent license for any patents necessary to implement the specification. AMQP XQuery Exchanges aren't part of the single document AMQP Specification. Therefore, the AMQP Agreement doesn't factor in. Therefore, this patent is not subject to an automatic grant. Remember, this patent doesn't cover AMQP itself (and nobody's ever claimed that it does). Rather, it patents a quite obvious extension to AMQP. That's the problem: it's extension rather than core.

The only way around this at the moment given the licensing of the AMQP specification would be to add XQuery-based Exchanges to the AMQP specification (make it core rather than extension). However, since most work on the AMQP specification is to make it smaller and focus on standardized/interoperable extensions, this isn't a great solution.

In short, not impressed with the response. I didn't expect more, but this isn't a great state of play for those of us who care about efficient, vendor neutral message oriented middleware.

I am aware that the AMQP Working Group is working within themselves to try to resolve this situation, and I wish them all the best luck in that!

Monday, March 16, 2009

Red Hat's AMQP+XML Patent Application: Stupid, Shifty, Short-Sighted

UPDATE at bottom of post
I've finally gotten word back from my moles that Red Hat has essentially stated that Patent Application 20090063418 is a defensive patent on an extension to AMQP to work around the broken US patent system.

This patent was filed for in 2007 [1]. Its current state is that it's been opened up to public review, which is part of the new patent system process for allowing public feedback on "controversial" patents. Essentially, the application has been published, so provisional rights have attached (if the patent actually is granted, they can go after you for infringement dating from now), but the patent has not been granted. This process is set up so that you can object to the patent, usually via prior art (so if, for example, you were running AMQP with XML before 2007, or you have documentation in the public space that this was a known concept for extension, please let me or the patent office know ASAP).

The way the IP rights in the AMQP working group operate is that each member must agree to disclose and grant rights on any IP that they might hold that would be necessary to implement the AMQP specification, so that the specification isn't prone to stealth patents after ratification. So if Red Hat had gone for a patent on Direct Exchange Routing, they would have had to disclose that and license anybody else in the working group, rendering the patent effectively useless. However, because XML-based exchanges, while the most common example for a custom Exchange type in the AMQP literature, are not necessary for implementation of the spec, Red Hat were quite freely able to stealth file this patent.
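
To be clear about what's at stake technically: an XML content-based exchange conceptually just evaluates an expression per binding against each message body and routes to the queues whose bindings match. Here's a toy sketch using the JDK's built-in XPath support (the patent talks about XQuery; I'm using XPath purely because it ships with Java, and the Binding class is my own illustration, not anything from the patent or the spec):

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.xml.sax.InputSource;

final class XmlExchange {
    static final class Binding {
        final String queueName;
        final String expression; // e.g. "/trade[@ccy='GBP']"
        Binding(String queueName, String expression) {
            this.queueName = queueName;
            this.expression = expression;
        }
    }

    private final List<Binding> bindings = new ArrayList<Binding>();

    void bind(String queueName, String expression) {
        bindings.add(new Binding(queueName, expression));
    }

    // Returns the queues this XML message body should be routed to.
    List<String> route(String xmlBody) throws Exception {
        XPath xpath = XPathFactory.newInstance().newXPath();
        List<String> matches = new ArrayList<String>();
        for (Binding binding : bindings) {
            Boolean matched = (Boolean) xpath.evaluate(
                binding.expression,
                new InputSource(new StringReader(xmlBody)),
                XPathConstants.BOOLEAN);
            if (matched.booleanValue()) {
                matches.add(binding.queueName);
            }
        }
        return matches;
    }
}
```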

While they are thus legally within their rights to file the patent, this is a really bad maneuver on the part of Red Hat.

Let's assume that it is in fact a defensive patent, as Red Hat are claiming. If it is, why not disclose it? And who are they attempting to defend, the wider AMQP community or themselves in a patent war with IBM or someone else? If they're worried that someone else was going to come in and patent this precise thing, why not just put prior art into the wild so that the community could oppose anybody attempting to patent just this thing?

Let's take worst-case scenario here:

  • Solace Systems decides to implement AMQP in hardware
  • They already have a hardware XML-based routing system (equivalent to the patent when not applied to AMQP)
  • They release the product into the wild
  • Red Hat realizes that they're not selling much MRG, because people prefer the Solace solution
  • Red Hat sues Solace for damages
Do I think Red Hat is likely to do this? No. Do I think that, if I'm a commercial vendor, I have to assume Red Hat might do this at some point over the 20-year lifespan of the patent (assuming it actually is granted)? Yes, I have to. For all I know, Red Hat will sell the patent to a litigation firm and lose control over it in 13 years' time, and that firm will then sue me for infringement 13 years in the past.

Yes, the US patent system is horribly, horribly broken. But looking for opportunities to build a defensive patent portfolio around an open standard is exactly what's wrong with the US patent system. More importantly, it's really bad for AMQP itself, and thus bad for Red Hat.

As we discovered at the qCon AMQP talk last week, one standard concern that potential adopters have is "when are we gonna get a 'real' company implementing this?" Not many companies are currently eager to run something that doesn't have companies with deep pockets behind it (some are because it's open source, some aren't). Why would I be an early adopter of any AMQP implementation when the spectre of stealth patents filed by AMQP implementers on any obvious extension to the spec may be lurking around? Why adopt anything if doing so opens me up to the threat of a lawsuit, when there isn't anybody around with deep pockets able to offer me indemnification as of yet?

The main factor here is that AMQP is not going to get beyond the early-adopter ghetto until and unless it is embraced by commercial vendors of MOM technology. To get past the initial early adopter hurdle, you need to eventually draw in Progress, IBM, Oracle, Tibco, Fiorano, all these guys. And virtually all of them have XML-based routing technology. What Red Hat has effectively done is said "the moment you support AMQP in your product, without neutering the XML-based routing you already have, we can sue you." What incentive do those firms have to adopt AMQP and thus build the community?

The blowback is already starting even in the Open Source world. Martin Sustrik has stated to me:

We are suspending any work on AMQP support in 0MQ till the issue is resolved.

In effect, this completely poisons the well of the AMQP community, and I fail to see how, short of Red Hat granting control over the patent to some other entity, they can recover from this. It would not surprise me if this effectively kills AMQP, and with it the entire basis for the messaging component of MRG.

In sum, this was a really, really, really short-sighted move on the part of Red Hat, and I highly encourage the community to fight this patent on grounds of Obviousness and Prior Art, both of which I think it fails on. And then to never trust Red Hat again in any standardization process.

Footnotes:
[1]: This effectively means that pretty much the first thing they did on getting involved with AMQP was to file patents on it. What a lovely approach!

UPDATE 2009-03-17
Red Hat has come out with a press release on this issue. Link to the press release and my response in my follow-up post on the subject.

Sunday, March 15, 2009

USPTO #20090063418 - Yes, I'm Aware

This has hit Slashdot, so given my audience I have to respond. Yes, I've been aware of this for several days. In the interest of fairness, I'm waiting for Red Hat to make a comment that might be accessible to me.

I'm not willing to wait forever, and I'm not pleased about the situation. Considering I met Carl Trieloff for the first time at qCon this week, I'm going to let them respond, but my patience isn't eternal.

In summary, assume the following:

  • I know of the patent application;
  • I'm apoplectic with rage (like 3 on the Kirk anger scale[1]);
  • I'm swallowing that anger until I hear a good explanation for the patent application.

Footnotes:
[1]: A scale invented by ex-coworkers. Most people register in milli-Kirks at best. Yes, this is "Kirk's Rants" for a reason. :-)

Friday, March 13, 2009

Bamboo 2.2 - This is a Game Changer

Earlier this week, Atlassian announced Bamboo 2.2 had been released. Due to qCon pressures I hadn't had a chance to do a writeup on it until now, but I definitely wanted to mention it. It's a doozy.

The headline feature in 2.2 (and this is so big that I'm shocked that they didn't bump up to 3.0) is Elastic Bamboo. In essence, this allows you to use Amazon EC2 instances as your remote build agents, seamlessly. That's huge. Really huge.

Let's assume you're a developer trying to work out your build and test strategy. Your options right now are to:

  • Run all your build agents under the Bamboo process. Not an option if you have to work with a heterogeneous environment, so I'll consider this the Hobbyist approach.
  • Stick build agents on your various development machines as-and-when they match the exact configuration that you want to be able to test under. Doing this, though, will unpredictably mess with the machine's performance as Bamboo decides to take away all the resources to run a build-and-test.
  • Push VMware deep into the organization so that you can run your own VMware instances to match the configurations you want to test against. Doing this, though, means that you have to have these instances up and running pretty much all the time, whether you're testing against them or not.
  • Buy hardware just for testing. Uhm, haven't you heard? Capex is SOOO 2007, particularly for development infrastructure. Spend your Capex on 30" displays, yo.

Those are all fine options if you have friendly IT people who love doing things for developers as fast as possible. I seldom run into those. Elastic Bamboo gives you the option of running your build agents in EC2, starting them up to run a build and shutting them down when the build is done. Just think about that. You now have the option to have unlimited build-and-test configurations, completely outside the control of your IT department, and only pay for them when you're actually testing against them! Have a configuration you only supported for an old release that only a few customers are using? No worries, leave the instance in place, and it'll only launch when you change that really old branch. Have a configuration that only one customer is stupid enough to use in production? No worries, just leave the instance configured and run it every week or two. You now have no excuses for not exhaustively testing against your matrix all the time (I'm talking to you, Sonic!).
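
The underlying pattern is simple enough to sketch. This is my own illustration of the acquire-build-terminate lifecycle, not Atlassian's code; the CloudAgents interface is hypothetical and stands in for whatever EC2 wrapper you'd actually use:

```java
// Hypothetical wrapper around the EC2 API; the point is the lifecycle.
interface CloudAgents {
    String launchAgent(String imageId); // returns the instance id
    void terminate(String instanceId);
}

final class ElasticBuildRunner {
    private final CloudAgents cloud;

    ElasticBuildRunner(CloudAgents cloud) {
        this.cloud = cloud;
    }

    void run(String agentImageId, Runnable build) {
        // You pay for the agent only between these two calls.
        String instanceId = cloud.launchAgent(agentImageId);
        try {
            build.run();
        } finally {
            cloud.terminate(instanceId);
        }
    }
}
```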

Along the way, they ended up realizing precisely what I realized when I tried to run build agents between London, New York, and Hong Kong: Bamboo is a hog when it comes to log and artifact shipping over high-latency links (and often low-latency ones as well). They've fixed that, because if you're developing in Sydney and the closest AWS site is US East, you can't rely on LAN speeds anymore. All in all, Atlassian resolved 5 issues that I was the original filer for [1]. That's a great track record, and I don't know many commercial software companies for which I could even establish that from the public record. Yet another reason I shill for Atlassian constantly. [2]

In addition to the whole "game-changing" thing, this is also extremely cool as a utility computing play. You probably could have done this yourself already (subject to the artifact/log shipping problems). But they're doing it for you. I think we're going to see this more and more from commercial software vendors as they add support for major cloud computing platforms to their offerings. Don't be surprised if anything that could benefit from cloud CPU resources starts automatically coming AWS-enabled.

I got cornered down t' pub at qCon by The Build Doctor and gave an only slightly inebriated rant about Elastic Bamboo. For those of you who don't know what I look or sound like, here I am in all my gesticulating glory:

Major Caveat: My old employers, who are nothing if not early adopters of anything, upgraded an existing infrastructure with a lot of MSBuild, and it's not going particularly well for them. It appears that the official closing of BAM-1413 (MSBuild support [3]) actually broke the MSBuild support that was part of the NAnt plugin (which already shipped with Bamboo and supported MSBuild to great success). This is causing massive problems for the dozens of MSBuild-based projects at my old cash money gig. When it gets resolved I'll update.

UPDATE: If you upgrade to the newest (2.1.9) NAnt plugin, it fixes the problems.

Minor Caveat: The Pre-Post Build Command Plugin doesn't work with 2.2. My old coworkers hacked a local version to get it to work, but there was an API change that broke the plugin.

The worst thing? I don't even get to use Bamboo at my current cash money gig. We're stuck with Electric Commander [4]. Sigh.

Footnotes:
[1]: For those counting: BAM-3177, BAM-1831, BAM-2989, BAM-2852, and BAM-2983
[2]: Fair disclosure: after I published that original mega-shill, they sent me an insanely great laptop bag, which is now my daily turtle shell. Moral: suck up to cool companies and they send you great swag.
[3]: Oh, look, there I am again in the comments. I'm everywhere in this release.
[4]: Product by company founded by John Ousterhout, inventor of Tcl. I shall assume my gentle readers all have a strongly held opinion about Tcl one way or the other and say no more. But EC is no Bamboo, even though it's been around a lot longer. Or maybe it's just badly run here.

Thursday, March 12, 2009

qCon Restores My Faith In Tech Conferences

I'm back at my cash money day gig after spending yesterday at qCon London, where I gave a presentation on RESTful approaches to financial data integration. Before I went, I have to say that I was a little down on the conference thing in general, having attended (and spoken at) way too many that were either just Vendor Drivel ("Got a problem? Buy my solution!") [1], or Preaching To The Converted ("Technology/Methodology X is great! Aren't we all so clever for noticing it?"). qCon has largely restored my faith that it's possible to put together an excellent technical conference that's relevant for the way we communicate about technical subjects today.

First of all, having individual tracks have their own coordinators helps with this quite a bit. In my case, the Architectures in Financial Services track was organized by Alexis Richardson and Cleve Gibbon, who did a fantastic job of putting together a quite varied collection of speakers, none of whom was a pure Vendor Shill. In fact, there was only one speaker who even had something to sell you at all, and he actually presented something that was quite interesting from a technology and business perspective, and could have been done with alternate vendor products as well, so it wasn't just a pure sales pitch. We had the CTO of a spread betting company, the head technologists of 3 of the AMQP implementations (RabbitMQ, OpenAMQ, and qPid/Red Hat MRG), some ranting geek, and the head of a statistical arbitrage hedge fund (who codes his own management interfaces in Curses).

Looking at the other tracks, they had a similar quality of curation. The conference had the inventors of Clojure, the Lift Web Framework, Erlang; practitioners/architects at brand-name companies galore; luminaries like Steve Vinoski, Tony Hoare [2], and Cameron Purdy. This concentration of insane brilliance with very few vendors giving thinly disguised pitches is one of the things that makes conferences relevant from a presentation-quality perspective.

My frustration and disassociation largely came from the rise of the internet programming community. Blogs, RSS readers, StackOverflow, Proggit: all of these disintermediated conference organizers from the programming public. Why get on a plane three times a year and watch the same old shills (of products and consulting services, of course) bore the heck out of you for hours at a time, just for the couple of nuggets of useful conversation that you couldn't get any other way? You can just follow the people you're interested in online and skip all of that.

So how do I think qCon has become relevant? What is it doing well?

  • They understand presenters. Want to speak at JavaOne? Last time I tried, you had to have your slides completely ready months in advance. No last-minute changes allowed. Given the velocity at which development is changing, that means that by the time of the show, everything you might want to talk about is irrelevant. At qCon, nobody asks for your slides in advance. Rather, they know that we're all doing last-minute revisions until 2 minutes before we go live. So they just walk up with a USB key while you're still onstage and grab them right there and then.
  • It's not single-focus. While I think there are places for single-focus conferences (whether on programming languages, methodologies, or specific technologies), it's hard. Getting people together requires at least a couple of days of content, and how much content can you put together on one single subject without it getting boring and repetitious (and thus Preaching to the Converted, since the only person who wants to hear 3 days' worth of "Scrum/Perforce/Ruby Is Great" is someone who wants their worldview validated by others)?
  • It has the right number of tracks. By having about 5 things going on, you always have a choice of something interesting to see, but you're not so spoiled for choice that you get frustrated that you can't go to half the things you find interesting.
  • They grab practitioners. This is extraordinary, because a lot of the people who showed up to give presentations aren't prolific bloggers or writers or speakers. You get their experience even though they have maybe one or two things they can (or want to) talk about a year. That gives you remarkably deep access to senior technologists you wouldn't otherwise get as part of the online development conversation (unlike the guys who show up and present at every single conference, whom you sometimes can't seem to get to shut up).
  • Free Beer. Seriously, while I expect that tonight there will be many people down t' pub, hosting a free beer night got lots of side conversations going, fully lubricated by alcohol. While probably not a ton of technical content got disseminated, people made contacts and generally talked about what they're doing and why.

But the attitude of the audience, and the technologies around it, have changed too. Laptops were open everywhere, and people were quite openly half listening to the talks while doing other things (presenters often working on their slides). Wifi sucked but was available, and people were blogging and emailing and twittering and looking stuff up online about the content of the speeches. That's a sea change.

One of the reasons I stopped liking attending conferences was that if a presentation was boring me, I didn't have the ability to do anything else. Leaving was rude, and even by the time laptops became widely used, it was considered rude not to pay full attention to the speaker, no matter how rubbish he was. These days it's not (I've been to a number of recent meetings with tech companies where every single attendee has a laptop open on the table and is only half paying attention; this is now normal). That changes everything, because it means that the conference is part of that larger programming community on the internet. qCon isn't an experiential event; it's a nebulous event with a nexus around the people attending in person.

That's critical: there's still value in qCon for those who don't attend. Slides are online immediately; people blog about it during and after; interviews are performed and posted to web sites; presentations are filmed and trickled out over the course of the year; interesting points from most talks are tweeted out. You can still benefit from qCon even if you don't attend, but if you do, you're just a greater part of it, as you're at the core. And that's precisely how qCon to me seemed extremely relevant in the internet communication era.

So if you think conferences suck, go to a qCon one. I'm very glad I did, and only wish I had been able to stay for the whole thing.

Footnotes
[1]: Guilty! When I worked at M7, I was the designated thinly-disguised marketing shill at SDEast and BorCon.
[2]: Turing Award Winner in da hizzy! Also, best quote evar: "QuickSort was only the second algorithm I tried to write." Wow.

Wednesday, March 11, 2009

Slides From qCon London 2009 Presentation

Just finished my talk at qCon London 2009, and I wanted to get the slides up here on the blog ASAP. A bigger writeup on the qCon experience is forthcoming (unfortunately, due to cash-money job constraints, I'm only able to be here today), but I like to get slides up right away.

Slides for all of you are here:

They're also available on the qCon schedule page already.

Sunday, March 08, 2009

Even My Mom Rejected Vista

When I moved across the pond 5 years ago, I gave my (then 3-year-old) personal computer to my mom. Since then it's pretty much ceased to function properly: it's running Windows 2000 Professional, has never had a virus scanner installed, one of the two 10k RPM Ultra-160 SCSI drives (yes, it was one of my Database Development workstations) is dead, and generally it's full of death. It needs to die. I finally convinced my mom that she should move to a Mac. After all, as near as I can tell, my mom uses her computer for:

  • Pictures of Grandchildren. Viewing them, printing them, saving them, forwarding them.
  • Pictures of Zoo Animals. Ditto above.
  • Surfing the Web. I'm not quite sure what she does, but it's clearly not going to Snopes, as the final major use is
  • Forwarding Chain Emails.

In the run-up to the new Mac Minis finally appearing, when I started to get tired of Steve rejecting all non-laptop customers, I told her, "If they don't come out with a new Mac Mini, I'm afraid I might have to recommend that you just get a cheap Dell, as your current computer simply isn't going to last." Her response? "No. If I get a new PC I'll have to get Vista."

Yes, Vista bashing is old hat in the tech community, but this is my 65-year-old mother (who doesn't have the foggiest idea what in the world I actually do in technology at this point, except that I've managed to convince her that no matter how much I worked on technology for structuring CDOs, the current financial meltdown is most certainly not my fault). She doesn't know anything about UAC, or driver woes, or file copies never terminating, or any of that. All she knows is "Vista Sucks, Avoid It Like The Plague."

What makes Microsoft's position less tenable is that they've made it abundantly clear that they know it's a PR disaster (do I actually think it's a terrible operating system? I wouldn't know from first-hand experience, as I've never run it. I doubt it, to be honest. I think it's probably better than XP in a number of areas, but it's different, without sufficiently obvious advantages). They've EOLed XP, even though they know their customers want it. Heck, they'll charge corporate customers extra to install the old OS that they want and that works just fine for them. How insane is that?

Can you imagine any firm other than Microsoft saying "We know that nobody wants this product; moreover, we have no legal or regulatory reason forcing us to sell the new product in favor of the old product that people actually do want; however, we have never admitted an error ever, and so we will not admit that Vista was an error"? I mean, this is the company that probably still has people who thought Microsoft Bob was a good idea! Their solution here, in addition to charging you extra for a downgrade, is to run ads essentially telling you "Vista isn't as bad as you think it is." That's not the point. By now the stench of failure hangs over Vista so thickly that my mom would rather buy an ancient, overpriced Mac Mini than touch it.

The move to Windows 7 is a good one: fix what's wrong, break with the past, do it fast. It might be the next version of Windows I'm forced professionally to use.

Because when even my mom says an operating system sucks, you know it has teh suck.

And my mom is the proud owner of her new Mac Mini, which the nice gents at the local Apple Store even populated with her Mozilla 1.5 email and addresses (yes, I said it was ancient).

Meta-Note: Switched to Disqus

As a meta-note, I've switched over the comment system for new posts to Disqus, now that Blogger has finally allowed it without barfing when I update my template. I'll be monitoring for a period to make sure that things aren't too wacky, but please let me know (kirk -at- kirkwylie -dot- com) if you find any problems.

And now back to our regularly scheduled ranting...

UPDATE: The switchover, it does nothing. T0rx is able to do it no problem with effectively the same template, and I can't. Bloody hell. I'm getting the hell off Blogger as soon as I don't have the lurgy.

Tuesday, March 03, 2009

Index Raises Another Round

TechCrunch reports: Index raises an EUR350MM round. TechCrunch UK has more, with a better analysis from a European perspective.

As I've said before, it's a perfect time to be investing Euros in London-based firms.

Monday, March 02, 2009

Dell, Your Pricing Policies Suck

I'm in the market for a Netbook, and I live in the UK. But as a USAmerican who cannot for the life of me retrain my fingers to use the UK keyboard layout, I want a US keyboard for it or there's no point.

I start thinking "Hey, that Samsung NC10 is pretty good, why not get that?" Except there's no US keyboard available, not even as a replacement part through a service centre. And I really don't want to ship one from Amazon.com.

So I also hear really good things about the Dell Mini 9: it's dirt cheap, and it runs that Linux thing all the kids keep going on about. And since Dell is a USAmerican company, surely they can accommodate me? No.

Dell will not sell you a Dell Mini 9 with a US keyboard in the UK, and they will not ship one from USAmerica to a foreign address.

Ahhh, but it turns out that if you know to order the magical part U061H, you can get a replacement keyboard module in US-International layout (technically a replacement part for the UK-only Vostro A90), and it's field-replaceable. Sounds great. So I figure I'll buy the netbook, buy the keyboard module as well, and match them up myself.

Call Dell UK and ask for that part? GBP 49 + VAT + Shipping.

Call Dell US and ask for the identical part? USD 14.99 + Shipping. But they won't ship outside the US.

And this is a replacement part for a UK-only machine! They want 25% of the original value of the whole netbook for just the keyboard module.

I have an out: have my parents buy it and ship it over to me, and I'll still save about GBP 30 compared with the official import route. But Dell, your pricing policies suck and have no earthly justification of any form. Sterling may be the European Peso, but last time I checked it was still worth rather more than the GBP 1 == USD 0.306 your prices imply.
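
If you want to check the arithmetic, here's a quick back-of-the-envelope sketch in Python. The two part prices are the quotes above; the GBP 199 netbook price is my own ballpark figure, used only to illustrate the 25%-of-the-machine point:

    # Back-of-the-envelope check on Dell's keyboard pricing.
    uk_price_gbp = 49.00   # Dell UK quote for part U061H (ex VAT and shipping)
    us_price_usd = 14.99   # Dell US quote for the identical part (ex shipping)

    # The exchange rate Dell's pricing implies, if the two quotes were
    # actually equivalent: how many dollars one pound buys.
    implied_rate = us_price_usd / uk_price_gbp
    print("Implied rate: GBP 1 == USD %.3f" % implied_rate)  # ~0.306

    # The keyboard module as a share of the whole netbook, assuming a
    # roughly GBP 199 Mini 9 (a ballpark, not an official price).
    netbook_price_gbp = 199.00
    print("Keyboard as a share of the netbook: %.0f%%"
          % (100 * uk_price_gbp / netbook_price_gbp))  # ~25%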

Gettin' My GeekSpace On

I believe that every geek needs some space to, well, geek out sometimes. For me, that involves:

  • Doing Open Source coding;
  • Playing around with technologies before I'm getting paid cash money for them;
  • Coding on the systems for the animal conservation charity of which I'm a trustee;[1]
  • Working on any startup ideas that I might have brewing at the moment.[2]

When I was living in Silicon Valley, I always had this type of space. Whether it was a bedroom I used only for coding and sleeping (and other things requiring a bed), or a guest bedroom I appropriated as a pure geek room, I had somewhere that could host a nonstop whir of fans from multiple workstations, lots of monitors [3], and lots and lots of mess without any complaints from the significant other. And then I moved to London and it all ended.

When I first moved here I lived in no fewer than 4 homes in my first year[4]; none of them allowed me to set up a Geek Space. Flats here are small (you pay roughly what you would in San Francisco or Manhattan, but the flat you get for the money is drastically smaller [5]). After 2 years we moved into a house where I finally had my Geek Room, but we only lived there for 9 months before moving into our current home, a minimalist loft.

While a minimalist loft suits us aesthetically, it doesn't combine very well with a geek-friendly workspace. Convincing the other half that the Pawson-esque interior can support a desk covered in cables, books, scraps of paper, and at least 3 monitors (or one huge one) is a pretty hard sell. So I did what most London-based geeks do: I got a powerful laptop.

It's a compelling sell. Get a MacBook Pro and you have a system running Unix with a pretty darn beefy processor and bus. Plus, you can take it out of the cupboard when you want to work and put it back when you're done. Surely that's going to be good for coding and supreme geek activities, while supporting your small-flat aesthetic requirements? Uhm, no. When you're used to a crazy keyboard, a crazy trackball, and at least two 1600x1200 displays, working effectively within the confines of a laptop is virtually impossible. Plus, Mac key bindings are bloody braindead for coding (seriously? Steve thinks Home/End should mean something different than they do to the rest of the world? Take your mock-turtleneck and shove it, yo). You find yourself constantly thinking "I could do this faster on a real computer," and that distracts you.

Plus, I'm easily distracted. If I have a room all to myself, I can properly geek it out and ignore the distractions of being at home. If I'm in a corner of the general space I might be more social, but I'm constantly going to be distracted by being at home, and looking to just chill out on the sofa. Not conducive to being geeky.

This didn't impact me quite so much until recently: my previous two cash-money gigs were extremely laid back, and had no problems with my doing my own stuff on down-time as long as I got the job done. In addition, my jobs required me to constantly evaluate new technologies for addition to the company's recommended technology base, meaning I was being paid to play with emerging technologies. Current cash money gig? Not so much.

So when I found out that there was a desk up for rent in a shared office in Shoreditch, I went to meet the guys, and I signed up. I'll have a desk in a room with James Governor, James Stewart, and Matt Patterson. Admittedly they're there mostly during the day, and I'll mostly be there evenings (on the way home) and weekends, but I'm pretty excited about having a suitable GeekSpace where I can keep my geek-self from intruding on my home life, and where I can geek out with others (again, previous jobs had more people amenable to doing this during business hours than the current gig does). All this for less than I spend on beer in a month.

I think this is probably a pretty good model that other geeks in suitably expensive domiciles might want to follow:

  • Find a cheap room reasonably close to where you work, or in between where you work and where you live;
  • Rent the room and network access;
  • Split the costs between the lot of you.
You get the best of all worlds: a space where you can geek out, and no home-life strife. What's not to like?[6]

I'll be posting more about this as I get myself fully embedded, and as I learn some good rules on combining an off-site GeekSpace with an at-home life devoid of real workstations. Consider this an experiment in work-life balance for the geek set.

Footnotes

[1]: You'd be shocked at the level of paperwork and data management even a simple zoo needs, and how little of it appears to be general enough that you can pay someone to take the burden off your hands. I'm amazed that charities a little bigger than we are, without a software engineer available, manage to comply with the requirements at all.
[2]: Of course officially I have none of these. None at all. Nothing to see here; move along, move along.
[3]: Ever tried to make sure an Ikea desk isn't going to collapse under two 21" CRTs? It can be quite nerve-wracking...
[4]: My personal life got very complicated when I moved here. If you don't know me personally, that's all I'm gonna say; if you do know me, you know what happened.
[5]: 750 square feet for a 2br being a luxury? Seriously? London's that crowded? I don't think so. It's more that the planning system here is so insane that once you get the chance to build anything, you cram in as many units as you can. For a place that isn't a concrete canyon/jungle like Manhattan, it's a bit ridiculous, really.
[6]: This isn't relevant to the Silicon Valley set, of course. What self-respecting Silicon Valley geek doesn't have the ability to set up a Geek Space at home? None.