Sunday, March 29, 2009
From The Economist Entrepreneurship Special Report

The European venture-capital industry, too, is less developed than the American one (significantly, in many countries it is called "risk" capital rather than "venture" capital). In 2005, for example, European venture capitalists invested EUR12.7 billion in Europe whereas American venture capitalists invested EUR17.4 billion in America. America has at least 50 times as many "angel" investors as Europe, thanks to the taxman's greater forbearance. Yet for all its structural and cultural problems, Europe has started to change, not least because America's venture capitalists have recently started to export their model. In the 1990s Silicon Valley's moneybags believed that they should invest "no further than 20 miles from their offices", but lately the Valley's finest have been establishing offices in Asia and Europe. This is partly because they recognise [sic.] that technological breakthroughs are being made in many more places, but partly also because they believe that applying American methods to new economies can start a torrent of entrepreneurial creativity.

I've been writing about this exact same thing, and I think it's becoming clear to the general VC community: investing money wisely leads to cultural changes that allow you more opportunities for investing in the future. Much as a Valley GP may invest in an untried entrepreneur, expecting that while this one may not pop the next one will, one can view an entire market the same way.

Otherwise an excellent special report. You should read it; it will make you smarter.
Wednesday, March 25, 2009
Recruiter Tip #1: Don't Block Your Phone Number
- If I'm actively looking for a job, I may field 5-10 calls from recruiters a day. Some of them I want to talk to, some of them I don't have time for (if I'm rushing to an interview, I want to take a call from the recruiter that put me up for that interview in case it's a change of plans; otherwise, I don't have time for you). Help me figure out whether I want/need to take your call when you make it.
- When you call me with your number showing, I can easily put it into my phone so that I can call you back. If I'm out and about, I don't have paper and pen handy, so just telling me your phone number is largely useless, since I can't take it down. I can call you back much more easily if my phone has captured your number.
- Many recruiters don't leave messages. I want to know who is being annoying and calling me 5 times a day without leaving a message, so that I can put them on my bad list. Conversely, if I get 5 blocked calls and one of them is you and you're thoughtful enough to leave a voice mail, you get kudos.
- I have Visual Voicemail on my iPhone. That doesn't help me replay your message when every single pending voice mail is from Blocked.
- If I can't pick up the phone because I'm in a meeting or something, it makes it very easy for me to see it's you and email you back saying "hey, was this urgent or did you just want to check up on me?" Remember, in finance we have to plan on privacy in advance.
- If I'm a hiring manager, I expect the vast majority of recruiters calling me are cold calling me trying to find out if they can recruit for me. I want to pick up the phone for the recruiters that I'm actively on a search with, while ignoring the ones I'm not. If you're working for me, I want to know that so that I can pick up the call while sending the others to voice mail. So if you have an agreement with my firm, you have no reason not to announce your presence before I pick up.
- Give me a direct line rather than a main number; I want to be able to call you back directly, particularly if I'm dealing with different recruiters from the same firm, one of whom I want to talk to and one of whom I don't.
Tuesday, March 24, 2009
Best Effort Delivery Fails At The Worst Time
Background on Best Effort Systems
As a reminder, distributed messaging systems operate as a series of broker instances, each located in the same process, VM, or OS as the code participating in the message network. For example, it might be a library that's bound into your process space (like 29West), or it might be a daemon that is hosted on the same machine (like Rendezvous). These distributed components then communicate over a distributed, unreliable communications channel (such as Ethernet broadcast or IP multicast). [1] Conventionally, publishers in these systems have no knowledge of the actual consumers, and no access control is thus possible (how can you enforce AuthN or AuthZ if you're just bit-blasting onto a wire?). They simply fire-and-forget in the most literal form, as they're sending over a network protocol that has no concept of acknowledgment. So they issue bits onto the wire, and hope they get to any recipients. Management of these systems involves setting up your networking infrastructure to handle IP multicast properly, or making sure you have sufficient Ethernet subnet-to-subnet bridges (rvrds if you're working with Rendezvous) to make sure that publishers and consumers can communicate.
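To make "fire-and-forget in the most literal form" concrete, here's a minimal sketch of what publication looks like at the very bottom of the stack, using plain Java UDP multicast. The group address, port, and payload format are all made up for illustration; real products like Rendezvous or 29West layer subject-based addressing and their own framing on top of this.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

// Fire-and-forget at its most literal: bits go onto the wire with no
// connection, no ACK, and no knowledge of who (if anyone) is listening.
public class FireAndForgetPublisher {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("239.1.2.3"); // hypothetical multicast group
        byte[] payload = "IBM.N bid=96.10 ask=96.12".getBytes(StandardCharsets.UTF_8);
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.send(new DatagramPacket(payload, payload.length, group, 45678));
            // That's it. If a consumer was slow, absent, or behind a glitchy
            // switch, the message is simply gone and nobody is told.
        }
    }
}
```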
The only guarantee you have is one of "best effort." What this means is that publishers and consumers will do everything reasonable to ensure that messages get from publisher to consumer, without sacrificing speed or adding unnecessary latency into the network. The moment you have a slow consumer, or you have a network glitch, or a publisher goes down, that's it: messages get lost or thrown away, and you don't know about it (most of the time; some systems have an optional "you just lost messages" signal you can receive, but in the worst case your client can't tell that it's lost anything at all). That's because in these scenarios, anything that you can do to strengthen the delivery semantics will cost you in performance. If you don't need it, why pay for it?
This is very different from guaranteed-delivery semantics, where the middleware is responsible for throttling and caching such that messages aren't lost unless something really drastic happens; simply throwing away a message because an internal queue was starting to fill up isn't something you expect from those systems.
And what you get out of this is a lot of raw speed. No TCP stacks getting in the way, no sliding windows to deal with ACK messages, no central routing brokers. Fast, low-latency messages. Sure, they're unreliable, but do some interesting stuff with the protocol, and consumers who miss messages can even request the ones that they missed from the publisher (so best-effort can be pretty-darn-good-indeed effort).
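How does a consumer even know it missed something, let alone ask for it again? The usual trick is publisher-assigned sequence numbers with NAK-based retransmission: the consumer watches for gaps in the sequence and requests the missing range back from the publisher. Here's a minimal sketch of the gap-detection half; the interface is hypothetical, not any particular vendor's API.

```java
// Sketch: detect message loss under best-effort delivery by watching
// publisher-assigned sequence numbers. A real consumer would send a
// retransmit request (NAK) for any gap this reports.
public final class GapDetector {
    private long expected = -1; // -1 until we see the first message

    /** Returns the missed range [fromSeqNo, toSeqNo), or null if nothing was missed. */
    public long[] onMessage(long seqNo) {
        if (expected >= 0 && seqNo > expected) {
            long[] gap = {expected, seqNo}; // these sequence numbers need a retransmit
            expected = seqNo + 1;
            return gap;
        }
        if (seqNo >= expected) {
            expected = seqNo + 1; // in-order message
        }
        return null; // in order, or an old duplicate we can ignore
    }
}
```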
This is seductive, and it's particularly seductive to developers. Setting up a simple Rendezvous (or other low-latency, best-effort delivery system) network is something you can do without your IT staff's involvement. There's no discussion of ports and broker hardware and networking topologies and involvement with your company's MOM Ops Team or anything else. Just start banging messages around, and look! It all works! And that's when things get problematic.
Best Effort = Development Win, Production Fail
Because the problem is that Best Effort systems seem much better than that in development. In development, you can run for days without dropping a single message, no matter what size it is. In testing, you can run many test iterations without being able to force a message drop [2]. More importantly, you really can't force a message drop, at least not without implementing a lossy Decorator on top of the MOM interfaces (a sketch of one follows the list below). And that's a problem. Developers easily self-justify both of these situations:
- Yeah, so 25% of that message handling code isn't covered by my functional tests. Who cares? That's just because, like RuntimeException handling, it's going to be so rare that there's no point forcing the test. If it were going to be more common, the functional and integration tests would already expose it.
- Why would I spend 40% of my time working on 25% of the code that I can't get to execute naturally? That's completely retarded. I need to get back to making my Traders happy by adding more functionality to the system and sticking more data into Excel. Get back to me when the Big Bad Error Case actually happens.
- In development, you're the only app running that is using the specified MOM system (because like most developers, you work on one system at a time). In production, your users probably have three different apps all stressing the underlying networking infrastructure for the same MOM system.
- In development, your networking system is probably under minimal load; aside from accessing your SCM system and streaming MP3s to your colleagues and reading tech blogs and tweeting [3], you're not really taxing it; your production users are pushing their networking hardware as hard as they can (mostly loading and saving 100MB Excel files to network stores, but that still eats bandwidth).
- In development, you typically have single use cases: either big messages or small ones, but not mixed; your production users have them mixed all around.
- Production users and systems have a lot more running on them (if traders, they'll have Bloomberg and Reuters and 8 Excel processes and 2 trading systems and your apps all on the same machine). Under peaks, everything suffers simultaneously as resources that otherwise are plentiful become quite scarce.
- In development, if your machine crashes, you have a lot more to worry about than whether that test case ran successfully; in production, machines crash and die all the time and the only thing your users care about is getting themselves up and running as fast as possible. [4]
- For intra-data-center communications: your development machines are used for coding and compiling and testing; your production machines are churning through messages 24x7, and are usually in a much noisier set of switches. That has effects that you can't evaluate, because you're not allowed to play on the production hardware.
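As promised above, here's what a lossy Decorator can look like: wrap whatever publishing interface your application uses and randomly drop messages, so your tests actually exercise the loss-handling code that production will eventually hit. MessagePublisher is a hypothetical application-level interface, not a vendor API.

```java
import java.util.Random;

// A hypothetical application-level publishing interface.
interface MessagePublisher {
    void publish(String subject, byte[] body);
}

// A lossy Decorator: wraps a real publisher and silently drops a fraction
// of messages, forcing your tests down the loss-handling code path.
final class LossyPublisher implements MessagePublisher {
    private final MessagePublisher delegate;
    private final double dropProbability;
    private final Random random = new Random();

    LossyPublisher(MessagePublisher delegate, double dropProbability) {
        this.delegate = delegate;
        this.dropProbability = dropProbability;
    }

    @Override
    public void publish(String subject, byte[] body) {
        if (random.nextDouble() < dropProbability) {
            return; // silently drop: exactly what production will do to you
        }
        delegate.publish(subject, body);
    }
}
```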
Effective Use of Best Effort Systems
So how do you effectively use distributed/low-latency/best-effort MOM systems?
- Always make sure your application has a non-best-effort delivery mechanism. Most of the problems with best-effort systems only affect you if you think your application should only have one MOM system, and you know that parts of your application are best suited to a distributed MOM infrastructure. If you are ever using a best-effort MOM system, start with a guaranteed-delivery MOM system and downgrade certain traffic accordingly. Never start in the other direction.
- Don't ever use a best-effort system for a human-interacting environment (though using one for the back-end processors behind a human front-end is fine). Seriously. You don't need it (humans can't detect latency below 100ms anyway), and you're just going to thrash their PC. Don't do it.
- Make sure messages are as idempotent as possible. While tick delivery systems based on deltas are extremely useful, try to make them as tolerant of individual message loss as possible (see the sketch after this list). For example:
- Base all delta ticks on deltas from the start of day, not from the last tick (because you might lose the previous tick)
- When shipping only those fields which have changed, also ship the most important other fields (so if shipping just a change to the Ask Size, also ship the basic Bid/Ask/Trade sextuplet of data) [6]
- Every time you publish or subscribe using a best-effort system, ask yourself "What happens if this message disappears completely?" If the answer is not "whatever," don't use that system and/or upgrade your semantics.
- Any time you are using these systems, have a dedicated network completely shut off from other networking uses. If you have multiple major use cases (think: equities and fixed income, or tick delivery and analytics bundles) on the same machine, use completely different hardware for those as well. Combining traffic on the same network interface is the biggest cause of message loss that I've seen. If your networking team can't/won't do this, get a new networking team.
- Have those network interfaces (one per message context) hooked up to the same physical switching fabric wherever possible to minimize the amount of potential loss in inter-switch communications.
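To make the idempotence advice concrete, here's a sketch of a loss-tolerant tick along the lines just described: every tick carries deltas relative to the start-of-day baseline rather than the previous tick, so a consumer can rebuild its picture from any single tick. All class and field names are illustrative, not from any real feed.

```java
// Illustrative only: a tick that can be interpreted in isolation.
public final class Tick {
    public final String symbol;
    public final long seqNo;              // lets consumers detect gaps
    public final double bidDeltaFromOpen; // vs. start of day, NOT vs. last tick
    public final double askDeltaFromOpen;
    public final double tradeDeltaFromOpen;

    public Tick(String symbol, long seqNo, double bidDeltaFromOpen,
                double askDeltaFromOpen, double tradeDeltaFromOpen) {
        this.symbol = symbol;
        this.seqNo = seqNo;
        this.bidDeltaFromOpen = bidDeltaFromOpen;
        this.askDeltaFromOpen = askDeltaFromOpen;
        this.tradeDeltaFromOpen = tradeDeltaFromOpen;
    }
}

final class TickConsumer {
    private final double openBid, openAsk, openTrade; // start-of-day baseline

    TickConsumer(double openBid, double openAsk, double openTrade) {
        this.openBid = openBid;
        this.openAsk = openAsk;
        this.openTrade = openTrade;
    }

    // Any single tick fully refreshes the state, so losing the previous
    // tick is harmless rather than corrupting.
    void onTick(Tick t) {
        double bid = openBid + t.bidDeltaFromOpen;
        double ask = openAsk + t.askDeltaFromOpen;
        double last = openTrade + t.tradeDeltaFromOpen;
        System.out.printf("%s bid=%.2f ask=%.2f last=%.2f%n", t.symbol, bid, ask, last);
    }
}
```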
If you think you want to use a distributed MOM system, you probably don't. If you know you do, only use it for server-to-server communications, and make your physical layout match your logical layout as much as possible.
Conclusion
This essay will probably seem pretty down on best-effort systems; that's intentional. I think these systems still have their place, but for most systems their place is back in history, when switches were unnecessary latency additions, broker-based MOM systems added yet more latency, and hardware-based TCPoEs didn't exist. If putting a broker in between publisher and consumer costs you 500ms of latency, you'd be stupid to do so if you can avoid it. But that's not the case anymore. Modern brokers can measure their additional latency in microseconds, with delivery semantics far better than broadcast/multicast-based best effort; if you can do that well with a broker in the middle, why sacrifice application semantics to remove it? That's why I think the future in this space is with systems like 29West and not with systems like Rendezvous: 29West is adding interesting functionality that you want, and better (think: non-publisher-host-based) resilience, while still having a non-broker-based infrastructure. While I still think 29West is fringe, Rendezvous is just stalled as a product, and it's not going anywhere. [7] So the next time you're sitting there coding against Rendezvous and thinking "you know, let's face it, this best-effort thing is a sham; all my messages show up!" just remember: the moment you assume that, it's going to bite you in the ass. And it'll probably happen on the last day of the quarter when volatility is up 25% and your head trader is having a down day. And then you'll wish you had sacrificed those 10us of latency for better-than-best-effort delivery.

Footnotes:
[1]: In JMS terms, imagine an embedded JMS driver which communicates without a broker, just through IP broadcast and multicast with every other JMS driver on the network.
[2]: Even worse, because many distributed tests will launch all components on one machine, you never exercise the network effects that distributed MOM systems entail. This makes sense from a development efficiency perspective, but is particularly problematic in a distributed MOM situation, because the chances of dropping a message fall to pretty much zero in such a scenario.
[3]: Assuming you don't work for Big Bank B, where all such forms of non-authorized-communications are Strictly Prohibited.
[4]: This is a particular worry if your Best Effort to Better Than Best Effort upgrade is based on disk or other publisher-side mechanisms: what happens when the publisher's machine goes down completely? Or what happens if it's mostly down, but the fastest way for your ops team to resolve the situation is to swap it out before it fully fails?
[5]: No, I'm not going to specify this more fully, because it's so prone to variation based on the runtime environment. Just trying to say that if you drop one message a day in dev, assume at least 5 per endpoint per day. Or more.
[6]: Having periodic snapshot ticks (so adding the sextuplet to the tick that happens every 5 seconds, or manually pumping a snapshot every 5 seconds under low activity) is one alternative here, but what happens if you lose the snapshot tick?
[7]: Except, bizarrely, into the realm of broker-based MOM by virtue of the Solace-powered Tibco Messaging Appliance.
Thursday, March 19, 2009
AMQP: Community, Patents, Openness
First of all, a lot of the confusion and requests for clarification come down to the precise nature of the patent grants that are part of joining the AMQP Working Group. Here's the problem: they're not public. More importantly, the definition of the term "the Specification," which appears to cause the biggest problems, isn't public. I searched the AMQP web site, and they're not there. If I'm wrong, please let me know, but I sure as heck don't see them anywhere. That makes it very difficult for anyone on the outside to verify any of the claims that people are making. I can't imagine that this is that commercially sensitive; heck, the W3C license grants are all public. Putting this into the public domain would solve a lot of problems here.

You Can't Steal People's Patents
Someone commented to me, "well, why not just expand the spec every time a patent comes out?" This triggers precisely the "proprietary vendor fear" that the existing patent does. Let's look at the hardware vendors. People like Tervela and Cisco, who are members of the AMQP Working Group, and Solace, who I've urged to become a member, have patents. And unlike the AMQP+XQuery patent, some of theirs actually are things that I think would be worthy of patenting. Having a policy that says "If you join the AMQP Working Group, we can render any of your patents moot by simply saying 'what your patent covers is now an Optional Extension to the AMQP Specification'" is just as much a kiss of death as saying "here's a patent that we filed that means if you add AMQP to your proprietary software product we'll sue you." This matters because, as I've said repeatedly, if AMQP does not end up encompassing the existing proprietary MOM vendors, it's not going to win.

Disclose What You Can't Disclose
The Red Hat press release, in particular, was taken by many people (some of whom had nothing to do with AMQP and were neutral third parties) as unnecessarily hostile, mostly because of what wasn't said. And I know that Red Hat employees have been completely absent from all public communications (excluding anonymous ones). These things will be carefully orchestrated by the lawyers; that's a matter of board-level policy. Here's what the board of any FOSS vendor who's dealing with patents needs to do: lay out the ground rules for what their employees can say and do, and tell the rest of us. Unless you do that, when you don't make a comment about a particular area, the rest of the world doesn't know why. Is it because there is malicious intent? Is it because your lawyers won't let you? Is it because you didn't deem it necessary? We don't know. We want to know, and I can't see a single rational basis for saying "we cannot tell you what we can and cannot talk about, even in the most generic terms." Meet us half-way.

Open Standards Are Exactly That
People participating in this process should expect that this is an Open Standard, and Open Standards are messy. They're delightfully, angrily, bloggingly, twitteringly messy. You think this little incident was unacceptably messy? Have you been following the closures debate in Java? What about HTML 5 or XHTML 1? ODF/OOXML? My point here is that open standards are, by their very nature, subject to a large group of people piping up from the sidelines. They won't always say what you want, and they won't always behave the way you want. And that's still valuable, because they add content. This just happens to be the first time that I know of that AMQP has been subject to a big public fight. Get used to it. This is actually a Good Thing. How many people do you think know what AMQP is now versus a week ago? How many people do you think are following what's happening in the community now versus a week ago? Public fights are signs that a community is healthy. Otherwise you end up with an ANSI Committee of the Good And Worthy And Irrelevant, and nobody listens to them anymore (I have my "pet committees" there, and not all of them are like this, but we all know some like these). They're closed, they deliberate in private for years, and only then do they hand something down from on high that the industry ignores because it's moved on in their absence. Debate, unruly as it is, means you're doing something right.

A Way Forward
I'd like to be so bold as to propose a constructive suggestion on a way to move forward. These are entirely my ideas and I've not discussed them with anybody yet.[2]
- Publish key licensing definitions. I know that some components of membership may be legally or commercially sensitive. However, if we (as users and developers) are going to trust that everything is 100% on the level, we have to know the key terms under which patents are handled. In particular, we need to know core definitions for things like "the Specification."
While many of these terms may seem self-evident to someone on the PMC, they aren't to those of us who aren't. And that information disparity can lead to a lot of argumentation as people argue from different levels of knowledge. Level the playing field, even if only a little bit.
- Determine an optional-but-interoperable feature policy. This should be for things that are not part of the required specification (and for the record, I strongly favor a minimal specification), where you still want interoperable implementations of common scenarios. This is where the Red Hat AMQP+XQuery patent would lie, because if I want to do XML-based CBR, I most certainly don't want to have 8 versions that are all just a tiny bit different to avoid each other's patents. Same with geospatial routing, dynamic multi-hop routing, whatever.
- Assemble a clearing house for prior art. There's a lot of prior art in messaging, which can hopefully be developed into a massive database that the community can use to collectively fight patents from non-members. Let's get this going. I don't know how to do this, but I'm sure there's a way.
- Communicate more openly. In a situation like this in another spec or working group, people from the spec committee would usually be talking constantly with the community in a constructive fashion. That didn't happen here. There was nothing public from the PMC for days while the process deteriorated, until members of the PMC started making very inflammatory statements in public.
As I mentioned earlier, you have to expect stuff like this is going to happen, and at least one or two people have to be empowered to talk openly and early, and need to do so. AMQP is turning the corner from something that was in a vendor niche to something that people are starting to Really Care About, and that means that you have to start dealing with people who aren't on the PMC on a regular basis, whether you like it or not.
And some of those will be differences of opinion between PMC members which spill out into the public domain. This is normal and healthy, and as long as people can remain civil and constructive, there's no reason why all communications should be limited to PMC-only channels. In fact that would be counterproductive, because you will never manage to present a single public face, so you have to allow for some public discussion when the PMC doesn't have a standard, quickly available response.
[1]: This is all from purely publicly available information.
[2]: For the first and last time, I am not a motherfarking sock puppet. If you think there's ever been anyone who can control what I say, you're sorely mistaken. Employers (e.g. Big Bank B) can tell me to not talk at all, but I'm not the type of person who will just repeat a party line with a straight face if I don't agree.
Wednesday, March 18, 2009
AMQP: Why Do I Care?
- IBM MQ
- SonicMQ
- Tibco Rendezvous
- ActiveMQ
- An in-house developed MOM broker at Derivatives Group A
- An in-house developed MOM broker at Big Bank B
- deal with another application that can send out messages to Broker Technology A, where I'm using (or want to use) Broker Technology B;
- be stopped from using Interesting Technology C because the Broker Technology I have to connect to doesn't support it;
- be nearly into production with a system, only to find out that the client library has a hideous bug that I can't fix because it's a proprietary protocol.
[1]: REMOVED (see update at top of post).
[2]: Not using Java? So sorry, there's nothing equivalent in any other language, and each vendor decides how to map JMS onto C++ in a completely different way.
[3]: Okay, I let Hans Jesperson from Solace buy me a hot chocolate. Guilty!
Tuesday, March 17, 2009
Red Hat's AMQP+XML Patent Press Release
Although there have been some recent questions about one of our patent applications relating to the AMQP specification, they appear to originate in an attempt to spread FUD. There's no reasonable, objective basis for controversy. In fact, the AMQP Agreement, which we helped to draft, expressly requires that members of the working group, including Red Hat, provide an irrevocable patent license to those that use or implement AMQP. In other words, even if we were to abandon our deeply held principles (which we will not), the AMQP Agreement prevents us from suing anyone for infringing a patent based on an implementation of AMQP specification. Moreover, our Patent Promise applies to every patent we obtain. Red Hat's patent portfolio will never be used against AMQP, and Red Hat will support any modifications to the specification needed in future to verify that fact.

The factual inaccuracy here comes from the fact that the AMQP Agreement covers an irrevocable patent license for any patents necessary to implement the specification. AMQP XQuery Exchanges aren't part of the single-document AMQP Specification. Therefore, the AMQP Agreement doesn't factor in, and this patent is not subject to an automatic grant. Remember, this patent doesn't cover AMQP itself (and nobody's ever claimed that it does). Rather, it patents a quite obvious extension to AMQP. That's the problem: it's extension rather than core.

The only way around this at the moment, given the licensing of the AMQP specification, would be to add XQuery-based Exchanges to the AMQP specification (make it core rather than extension). However, since most work on the AMQP specification is aimed at making it smaller and focusing on standardized/interoperable extensions, this isn't a great solution.

In short, I'm not impressed with the response. I didn't expect more, but this isn't a great state of play for those of us who care about efficient, vendor-neutral message-oriented middleware. I am aware that the AMQP Working Group is working within itself to try to resolve this situation, and I wish them the best of luck in that!
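For the curious, here's roughly what XML content-based routing of the kind at issue amounts to: a queue binding is an expression over the message body, and the exchange delivers to every queue whose expression matches. This toy sketch uses the JDK's XPath support rather than XQuery, and omits all the AMQP wiring; it's here only to show how thin the layer over existing, well-known technology really is.

```java
import java.io.StringReader;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.xml.sax.InputSource;

// Toy XML content-based routing: evaluate a binding expression against
// the message body and deliver on a match. Binding and message are
// hypothetical; a real exchange would do this per bound queue.
public class XmlContentRouter {
    public static void main(String[] args) throws Exception {
        String body = "<order><symbol>IBM</symbol><qty>500</qty></order>";
        String binding = "/order[qty > 100]"; // hypothetical queue binding

        XPath xpath = XPathFactory.newInstance().newXPath();
        Boolean matches = (Boolean) xpath.evaluate(
                "boolean(" + binding + ")",
                new InputSource(new StringReader(body)),
                XPathConstants.BOOLEAN);

        System.out.println(matches ? "route to bound queue" : "skip");
    }
}
```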
Monday, March 16, 2009
Red Hat's AMQP+XML Patent Application: Stupid, Shifty, Short-Sighted
I've finally gotten word back from my moles that Red Hat has essentially stated that Patent Application 20090063418 is a defensive patent on an extension to AMQP, to work around the broken US patent system. This patent was filed for in 2007 [1]. The current state of it is that it's been opened up to public review, which is part of the new patent system process for allowing public feedback on "controversial" patents. Essentially, as of now, infringement rights have started (so if the patent actually is granted, and you infringe as of now, they can go after you), but the patent has not been granted. This process is set up so that you can object to the patent, usually via prior art (so if, for example, you were running AMQP with XML before 2007, or you have documentation in the public space that this was a known concept for extension, please let me or the patent office know ASAP).

The way the IP rights in the AMQP Working Group operate is that each member must agree to disclose and grant rights on any IP that they might hold that would be necessary to implement the AMQP specification, so that the specification isn't prone to stealth patents after ratification. So if Red Hat had gone for a patent on Direct Exchange Routing, they would have had to disclose that and license anybody else in the working group, rendering the patent effectively useless. However, because XML-based exchanges, while the most common example of a custom Exchange type in the AMQP literature, are not necessary for implementation of the spec, Red Hat were quite freely able to stealth-file this patent.

While they are thus legally within their rights to file the patent, this is a really bad maneuver on the part of Red Hat. Let's assume that it is in fact a defensive patent, as Red Hat are claiming. If it is, why not disclose it? And who are they attempting to defend: the wider AMQP community, or themselves in a patent war with IBM or someone else? If they're worried that someone else was going to come in and patent this precise thing, why not just put prior art into the wild so that the community could oppose anybody attempting to patent just this thing? Let's take the worst-case scenario here:
- Solace Systems decides to implement AMQP in hardware
- They already have a hardware XML-based routing system (equivalent to the patent when not applied to AMQP)
- They release the product into the wild
- Red Hat realizes that they're not selling much MRG, because people prefer the Solace solution
- Red Hat sues Solace for damages
We are suspending any work on AMQP support in 0MQ till the issue is resolved.

In effect, this completely poisons the well of the AMQP community, and I fail to see how, short of Red Hat granting control over the patent to some other entity, they can recover from this. It would not surprise me if this effectively kills AMQP, and with it the entire basis for the messaging component of MRG.

In sum, this was a really, really, really short-sighted move on the part of Red Hat, and I highly encourage the community to fight this patent on grounds of Obviousness and Prior Art, on both of which I think it fails. And then to never trust Red Hat again in any standardization process.

Footnotes:
[1]: This effectively means that pretty much the first thing they did on getting involved with AMQP was to file patents on it. What a lovely approach!

UPDATE 2009-03-17
Red Hat has come out with a press release on this issue. Link to the press release and my response in my follow-up post on the subject.
Sunday, March 15, 2009
USPTO #20090063418 - Yes, I'm Aware
- I know of the patent application;
- I'm apoplectic with rage (like 3 on the Kirk anger scale[1]);
- I'm swallowing that anger until I hear a good explanation for the patent application.
[1]: A scale invented by ex-coworkers. Most people register in milli-Kirks at best. Yes, this is "Kirk's Rants" for a reason. :-)
Friday, March 13, 2009
Bamboo 2.2 - This is a Game Changer
- Run all your build agents under the Bamboo process. Not an option if you have to work with a heterogeneous environment, so I'll consider this the Hobbyist approach.
- Stick build agents on your various development machines as-and-when they match the exact configuration that you want to be able to test under. Doing this, though, will unpredictably mess with the machine's performance as Bamboo decides to take away all the resources to run a build-and-test.
- Push VMWare deep into the organization so that you can do your own VMWare instances to match the configurations you want to test against. Doing this, though, means that you have to have these instances up and running pretty much all the time, whether you're testing against them or not.
- Buy hardware just for testing. Uhm, haven't you heard? Capex is SOOO 2007, particularly for development infrastructure. Spend your Capex on 30" displays, yo.
[1]: For those counting: BAM-3177, BAM-1831, BAM-2989, BAM-2852, and BAM-2983
[2]: Fair disclosure: after I published that original mega-shill, they sent me an insanely great laptop bag, which is now my daily turtle shell. Moral: suck up to cool companies and they send you great swag.
[3]: Oh, look, there I am again in the comments. I'm everywhere in this release.
[4]: Product by company founded by John Ousterhout, inventor of Tcl. I shall assume my gentle readers all have a strongly held opinion about Tcl one way or the other and say no more. But EC is no Bamboo, even though it's been around a lot longer. Or maybe it's just badly run here.
Thursday, March 12, 2009
qCon Restores My Faith In Tech Conferences
- They understand presenters. Want to speak at JavaOne? Last time I tried, you have to have your slides completely ready months in advance. No last minute changes allowed. Given the velocity at which development is changing, that means that by the time of the show, everything you might want to talk about is irrelevant. At qCon, nobody asks for your slides in advance. Rather, they know that we're all doing last-minute revisions until 2 minutes before you go live. So they just walk up with a USB key while you're still onstage and grab them right there and then.
- It's not single-focus. While I think that there are places for single-focus conferences (whether on programming languages, methodologies, or specific technologies), they're hard to pull off. Getting people together requires at least a couple of days of content, and how much content can you put together on one single subject without it getting boring and repetitious (and thus Preaching to the Converted, as the only person who wants to hear 3 days' worth of "Scrum/Perforce/Ruby Is Great" is someone who wants their worldview validated by others)?
- It has the right number of tracks. By having about 5 things going on, you always have a choice of something interesting to see, but you're not so spoiled for choice that you get frustrated that you can't go to half the things you find interesting.
- They grab practitioners. This is extraordinary, because a lot of the people who showed up to give presentations aren't prolific bloggers or writers or speakers. You get their experience even though they have maybe only one or two things they can (or want to) talk about a year. That gives you remarkably deep access to senior technologists that you wouldn't otherwise get as part of the online development conversation (unlike the guys who show up and present at every single conference, who you can't seem to get to shut up sometimes).
- Free Beer. Seriously, while I expect that tonight there will be many people down t' pub, hosting a free beer night got lots of side conversations going, fully lubricated by alcohol. While probably not a ton of technical content got disseminated, people made contacts and generally talked about what they're doing and why.
[1]: Guilty! When I worked at M7, I was the designated thinly-disguised marketing shill at SDEast and BorCon.
[2]: Turing Award Winner in da hizzy! Also, best quote evar: "QuickSort was only the second algorithm I tried to write." Wow.
Wednesday, March 11, 2009
Slides From qCon London 2009 Presentation
Sunday, March 08, 2009
Even My Mom Rejected Vista
When I moved across the pond 5 years ago, I gave my (then 3-year-old) personal computer to my mom. Since then it's pretty much ceased to function properly: it's running Windows 2000 Professional, has never had a virus scanner installed, one of the two 10k RPM Ultra-160 SCSI drives (yes, it was one of my Database Development workstations) is dead, and generally it's full of death. It needs to die. I finally convinced my mom that she should move to a Mac. After all, as near as I can tell, my mom uses her computer for:
- Pictures of Grandchildren. Viewing them, printing them, saving them, forwarding them.
- Pictures of Zoo Animals. Ditto above.
- Surfing the Web. I'm not quite sure what she does, but it's clearly not going to Snopes, as the final major use is
- Forwarding Chain Emails.
In the run-up to the new Mac Minis finally appearing, when I started to get tired of Steve rejecting all non-laptop customers, I asked her, "if they don't come out with a new Mac Mini, I'm afraid I might have to recommend that you just get a cheap Dell, as your current computer simply isn't going to last." Her response? "No. If I get a new PC I'll have to get Vista."
Yes, the Vista bashing is old hat in the tech community, but this is my 65-year-old mother (who doesn't have the foggiest idea what in the world I actually do in technology at this point, except that I've managed to convince her that no matter how much I worked on technology for structuring CDOs, the current financial meltdown is most certainly not my fault). She doesn't know anything about UAC, or driver woes, or file copies never terminating, or any of that. All she knows is "Vista Sucks, Avoid It Like The Plague."
What makes the Microsoft position even less tenable is that they've made it abundantly clear that they know it's a PR disaster (do I actually think it's a terrible operating system? I wouldn't know from first-hand experience, to be honest, as I've never run it. I doubt it. I think it's probably better than XP in a number of areas, but it's different, without sufficiently obvious advantages). They've EOLed XP, even though they know their customers want it. Heck, they'll charge corporate customers extra to install the old OS that they want and that works just fine for them. How insane is that?
Can you imagine any firm other than Microsoft saying "We know that nobody wants this product; moreover, we have no legal or regulatory reason forcing us to sell the new product in favor of the old product that people actually do want; however, we have never admitted an error ever, and so we will not admit that Vista was an error"? I mean, this is the company that probably still has people who thought Microsoft Bob was a good idea! Their solution here, in addition to charging you extra for a downgrade, is to run ads essentially telling you "Vista isn't as bad as you think it is." That's not the point. By that point it had such a stench of failure that my mom would rather buy an ancient, overpriced Mac Mini to avoid it.
The move to Windows 7 is a good one: fix what's wrong, break with the past, do it fast. It might be the next version of Windows I'm forced professionally to use.
Because when even my mom says an operating system sucks, you know it has teh suck.
And my mom is the proud owner of her new Mac Mini, which the nice gents at the local Apple Store even populated with her Mozilla 1.5 email and addresses (yes, I said it was ancient).
Meta-Note: Switched to Disqus
As a meta-note, I've switched over the comment system for new posts to Disqus, now that Blogger has finally allowed it without barfing when I update my template. I'll be monitoring for a period to make sure that things aren't too wacky, but please let me know (kirk -at- kirkwylie -dot- com) if you find any problems.
And now back to our regularly scheduled ranting...
UPDATE: The switchover, it does nothing. T0rx is able to do it no problem with effectively the same template, and I can't. Bloody hell. I'm getting the hell off Blogger as soon as I don't have the lurgy.
Tuesday, March 03, 2009
Index Raises Another Round
TechCrunch reports that Index has raised a EUR350MM round. TechCrunch UK follows up with a better analysis from a European perspective.
As I've said before, perfect time to be investing Euros in London-based firms.
Monday, March 02, 2009
Dell, Your Pricing Policies Suck
I start thinking "Hey, that Samsung NC10 is pretty good, why not get that?" Except no US keyboard available, not even as a replacement part through a service centre. And I really don't want to ship one from Amazon.com.
So I also hear really good things about the Dell Mini 9, and it's dirt cheap, and runs that Linux thing all the kids keep going on about. And as a USAmerican company, surely they can accommodate me? No.
Dell will not sell you a Dell Mini 9 with a US keyboard in the UK, and they will not ship one from USAmerica to a foreign address.
Ahhh, but it turns out that if you know to order the magical part U061H, you can get a replacement keyboard module in US-International layout (which is technically a replacement part for the UK-only Vostro A90), and it's field replaceable. Sounds great. So I figure I'll buy the netbook, and also buy the keyboard module and match them up myself.
Call Dell UK and ask for that part? GBP 49 + VAT + Shipping.
Call Dell US and ask for the identical part? USD 14.99 + Shipping. But they won't ship outside the US.
And this is a replacement part for a UK-only machine! They want 25% of the original value of the whole netbook for just the keyboard module.
I mean, I have an out: have my parents buy it and ship it over to me, and I'll still save about GBP 30 versus the official import route. But Dell, your pricing policies suck and have no earthly justification of any form. Sterling may be the European Peso, but last time I checked it was still worth rather more than the GBP 1 == USD 0.306 implied by those two prices.
Gettin' My GeekSpace On
I believe that every geek needs some space to, well, geek out sometimes. For me, that involves:
- Doing Open Source coding;
- Playing around with technologies before I'm getting paid cash money for them;
- Coding on the systems for the animal conservation charity of which I'm a trustee;[1]
- Working on any startup ideas that I might have brewing at the moment.[2]
When I was living in Silicon Valley, I always had this type of space. Whether it was a bedroom that I used only for coding and sleeping (and other things requiring a bed), or a guest bedroom I appropriated for a pure geek room, I always had somewhere that I could have a nonstop whir of fans from multiple workstations, lots of monitors [3], and lots and lots of mess without any complaints from the significant other. And then I moved to London and it all ended.
When I first moved here I lived in no fewer than 4 homes in my first year[4]; none of them allowed me to set up a Geek Space. Flats here are small (you pay roughly what you do in San Francisco or Manhattan, but the flat you get for the money is drastically smaller than what you get in San Francisco for the same price [5]). Finally after 2 years we moved into a house where I finally had my Geek Room, but we only lived there for 9 months before we moved into our current home, a minimalist loft.
While a minimalist loft suits us aesthetically, it doesn't combine very well with a geek-friendly workspace. Convincing the other half that the Pawson-esque interior can support a desk covered in cables, books, scraps of paper, and at least 3 monitors (or one huge one) is a pretty hard sell. So I did what most London-based geeks do: I got a powerful laptop.
It's a compelling sell. Get a MacBook Pro and you have a system running Unix with a pretty darn beefy processor and bus. Plus, you can take it out of your cupboard when you want to do work, and put it back when you're done. Surely that's going to be good for coding and supreme geek activities, and support your small-flat aesthetic requirements? Uhm, no. When you're used to a crazy keyboard, a crazy trackball, and at least two 1600x1200 displays, restricting yourself to what you can do on a laptop is virtually impossible to do effectively. Plus, Mac key bindings are bloody retarded for coding (seriously? Steve thinks Home/End should mean different things than they do for the rest of the world? Take your mock-turtleneck and shove it, yo). You find yourself constantly thinking "I could do this faster on a real computer," and that distracts you.
Plus, I'm easily distracted. If I have a room all to myself, I can properly geek it out and ignore the distractions of being at home. If I'm in a corner of the general space I might be more social, but I'm constantly going to be distracted by being at home, and looking to just chill out on the sofa. Not conducive to being geeky.
This didn't impact me quite so much until recently: my previous two cash-money gigs were extremely laid back, and had no problems with my doing my own stuff on down-time as long as I got the job done. In addition, my jobs required me to constantly evaluate new technologies for addition to the company's recommended technology base, meaning I was being paid to play with emerging technologies. Current cash money gig? Not so much.
So when I found out that there was a desk up for rent in a shared office in Shoreditch, I went to meet the guys, and I signed up. I'll have a desk in a room with James Governor, James Stewart, and Matt Patterson. Admittedly they're there mostly during the days, and I'll mostly be there evenings (on the way home) and weekends, but I'm pretty excited about the chance to have a suitable GeekSpace where I can prevent my geek-self from intruding on my home life, and be able to geek out with others (again, previous jobs had more people conducive to doing this during business hours than the current gig). All this for less than I spend on beer in a month.
I think this is probably a pretty good model that other geeks in suitably expensive domiciles might want to follow:
- Find a cheap room reasonably close to where you work, or in between where you work and where you live;
- Rent the room and network access;
- Split the costs between the lot of you.
I'll be posting more about this as I get myself fully embedded, and as I learn some good rules on combining an off-site GeekSpace with an at-home life devoid of real workstations. Consider this an experiment in work-life balance for the geek set.
Footnotes
[1]: You'd be shocked at the level of paperwork and data management even a simple zoo would need, and how little of that appears to be general enough that you can pay someone to take the burden off your hands. I'm shocked at how charities a little bigger than we are, without a software engineer available, are able to comply with the requirements.
[2]: Of course officially I have none of these. None at all. Nothing to see here; move along, move along.
[3]: Ever tried to make sure an Ikea desk isn't going to collapse under two 21" CRTs? It can be quite nerve-wracking...
[4]: My personal life got very complicated when I moved here. If you don't know me personally, that's all I'm gonna say; if you do know me, you know what happened.
[5]: 750 square feet for a 2br being a luxury? Seriously? London's that crowded? I don't think so. It's more that the planning system is so insane here that once you get the chance to build anything, you cram in as many units as you can into the space. For a place that isn't a concrete canyon/jungle like Manhattan, it's a bit ridiculous, really.
[6]: This isn't relevant to the Silicon Valley set of course. What self respecting Silicon Valley geek doesn't have the ability to setup a Geek Space at home? None.