I've been involved in quite a few technical interviews over the years, and one thing that invariably (and rightfully) comes up is the programming problem. I view this as an opportunity to see whether someone can actually code, rather than just talk about it. The real issue is that the interview room is not the environment a developer will actually work in: you don't have a keyboard in front of you, you don't have your full tool suite available; at best it's a first-order approximation.
That being said, I think any technical interview that doesn't give the interviewer a chance to evaluate the candidate's actual coding skill is wasted time; while we need to think, write, and discuss, software engineers are paid to produce working code. Your job as an interviewer is to determine whether the candidate can do that to your standards and requirements. Your job as a candidate is to convince the interviewer of that fact.
So what are the basic ways in which I've seen this hurdle laid out?
Read Me The Code
This is the most obscure one, but it was part of my phone screen for a job at Google back in 2002 (disclosure: I was offered the job and, like an idiot, turned it down). Essentially, I was given a programming problem over the phone, had about 10 minutes on the line to write down a solution, and then had to read it out to the developer on the other end.
I didn't really get this as a concept, and I still don't. Is he mentally picturing the code as I read it out? Writing it down himself to make sure that I'm getting the semicolons and braces right? Why even bother? I'm not a fan of this one because I don't think it tells you anything about the candidate other than whether he can describe the syntactic elements of a program concisely over the phone.
I've chalked it up to an obscure Google-ism, or an interviewer experimenting with a new interviewing style (we've all done it).
Whiteboard/Pen-and-Paper
This is the most common form, and in it the candidate and the interviewer get together in a room and the candidate writes some code longhand, whether on paper or a whiteboard or anything else. I think that this is a relatively valuable exercise for simple programming problems, because if you can't write out something that works elegantly for a simple problem, then you probably don't have an excellent grasp of your underlying language.
But the key thing here is that you have to engage in a dialog with the candidate about their solution, particularly if you think there are tools or techniques or language features that they didn't employ. I've done this several times when someone had a strange solution, only to find out that they didn't remember the exact syntax for a language or library feature and so fell back on something they knew to be correct. And because the candidate doesn't know what the interviewer is looking for (or how much they'll mark off for a missing brace or semicolon), they're at a loss to know what precisely to optimize for.
Furthermore, you have to be clear about what you're after here. Some problems are conceptually difficult, and ideally the interviewer should be looking at your thought process and whether you can come up with a coded solution to the problem (damn the curly braces). Other problems are far simpler conceptually, and there the interviewer should be checking whether you can write a simple, production-quality routine longhand. Where things go awry is when interviewers want you to simultaneously come up with a solution to a really complex problem and get every syntactical element and edge case perfect on the first go. Gotcha problems fall into this area as well (as a candidate, I get it: you've solved this problem in the past; you're ever so clever, and I should be thrilled to work with someone as brilliant as you; but did this really teach you whether I can work with you effectively?).
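To make that distinction concrete, here is an invented example (not from any interview I've given or sat) of the kind of simple, self-contained routine it's fair to ask for longhand: small enough to get right on a whiteboard, with enough left over to discuss library and data-structure choices afterwards.

    // Hypothetical whiteboard-scale problem: count how often each word
    // appears in a line of text. The interesting follow-up is in the
    // choices: String.split vs. a tokenizer, which Map implementation,
    // how to treat case and punctuation.
    import java.util.Map;
    import java.util.TreeMap;

    public class WordCount {
        public static Map<String, Integer> count(String line) {
            Map<String, Integer> counts = new TreeMap<String, Integer>();
            for (String word : line.toLowerCase().split("\\s+")) {
                if (word.length() == 0) {
                    continue;   // split can yield an empty leading token
                }
                Integer current = counts.get(word);
                counts.put(word, current == null ? 1 : current + 1);
            }
            return counts;
        }
    }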
The main problem with both of these approaches, though, is that they're not realistic. You don't write code by hand anymore. You definitely don't read it over the phone. These exams test (to some extent) your raw programming ability and familiarity with the syntax and libraries of your programming languages, but they don't show how well you can actually engineer software. For that, you need one of the following methodologies.
In-Advance Programming Problem
I employed this as a low-pass filter while interviewing candidates at a previous employer, and it worked out pretty well. Rather than doing a phone screen at all, we had a relatively simple programming problem (in our case, XML walking) that we wanted a compilable, runnable solution to. Shouldn't have taken anybody competent more than an hour to do, and we wanted to see the style and nature of their code. What tools did they use? What language features? How did they document? How did they break up the problem? How did they package up the result?
This served two purposes: first, it weeded out the chaff who didn't really want to meet with us at all (if you think you're doing us a favor just by talking to us, you're not going to want to program in advance); second, it showed us a real-world example of what you can do with your chosen tool suite. Because the problem was so small, there was no excuse for providing something that didn't compile, didn't run, didn't produce correct output, or was ugly (all of which we got). Real software engineering involves spending time on niceties that aren't about the basic code execution path. Did you care enough to provide us with something that shows you can do those things?
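For concreteness, here is a minimal sketch of the sort of hour-or-less XML-walking exercise I have in mind; the specifics (printing an indented element tree using the standard DOM APIs) are my own stand-in, not the actual problem we used.

    import java.io.File;
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.Node;
    import org.w3c.dom.NodeList;

    // Walks an XML document and prints its element structure, one tag name
    // per line, indented by depth.
    public class XmlWalker {
        public static void main(String[] args) throws Exception {
            DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
            Document doc = builder.parse(new File(args[0]));
            walk(doc.getDocumentElement(), 0);
        }

        private static void walk(Element element, int depth) {
            StringBuilder indent = new StringBuilder();
            for (int i = 0; i < depth; i++) {
                indent.append("  ");
            }
            System.out.println(indent + element.getTagName());
            NodeList children = element.getChildNodes();
            for (int i = 0; i < children.getLength(); i++) {
                Node child = children.item(i);
                if (child.getNodeType() == Node.ELEMENT_NODE) {
                    walk((Element) child, depth + 1);
                }
            }
        }
    }

Even something this small surfaces the things we cared about: how the result is packaged, how it handles bad input, and whether any documentation or build script comes along with the code.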
Pair Programming Problem
Another financial services firm used this with me, and it was the first time I'd seen it. Essentially, I came over to the interviewer's computer, was asked which IDE I wanted to use (this was for a Java job, so I had my choice of Eclipse or IDEA), and he opened it up empty (a new workspace, for you Eclipse people). He then gave me the programming problem, and watched and talked to me as I worked through the entire end-to-end process in front of him.
I really liked this at the time. It was far more realistic than pen-and-paper could possibly have been, and it also showed him how well I had optimized my workflow with my tool suite. It also allowed us to explore more serious software engineering concepts like unit testing (I wrote the tests as I went along) and test coverage (making sure that my tests covered all inputs and results). I think this is an extremely strong way to check how well a programmer can work in a real-world scenario, and it is far better than pen-and-paper development for languages like C#, Python, and Java, which have very short project setup times (though I think you could do it for a simple C/C++ test as well). That's the key thing here: you're watching a developer work with a real-world tool suite.
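To illustrate the test-as-you-go rhythm, here is a small invented example in JUnit; the problem (formatting sorted integers into ranges) and all the names are placeholders of mine, not the problem I was actually given.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Tests and production code written in the same sitting, so the
    // interviewer sees the red/green cycle as well as the end result.
    public class RangeFormatterTest {

        // Production code kept in the same file so the sketch is self-contained.
        static class RangeFormatter {
            static String format(int[] sorted) {
                StringBuilder out = new StringBuilder();
                int i = 0;
                while (i < sorted.length) {
                    int start = i;
                    while (i + 1 < sorted.length && sorted[i + 1] == sorted[i] + 1) {
                        i++;
                    }
                    if (out.length() > 0) {
                        out.append(',');
                    }
                    out.append(sorted[start]);
                    if (i > start) {
                        out.append('-').append(sorted[i]);
                    }
                    i++;
                }
                return out.toString();
            }
        }

        @Test
        public void collapsesConsecutiveNumbersIntoRanges() {
            assertEquals("1-3,7,9-10", RangeFormatter.format(new int[] {1, 2, 3, 7, 9, 10}));
        }

        @Test
        public void handlesEmptyInput() {
            assertEquals("", RangeFormatter.format(new int[0]));
        }
    }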
Programming Death Race
This was the most extreme form of any of these that I've ever had to do, and it was extremely intense. I was given a Java file containing four empty method bodies, each documented with what it was supposed to do, and was told which ones I had to fill in. I then had 60 minutes to do two programming problems and return the resulting Java source file, which they fed into an automated system that compiled it and ran it through a test suite.
These were not simple, reverse-a-linked-list problems. Each of them could easily have taken far more than an hour to do. And they were conceptually difficult as well (I'm not going to disclose the precise problems because I don't want to ruin their methodology), meaning that you have to actually think. A lot. There's not a lot of time to properly think and code a really complex solution when you know you're under a deadline!
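To give a sense of the format only (the actual problems, as noted, aren't something I'll disclose), the shape of the file was roughly this, with an invented placeholder standing in for the real questions:

    /**
     * Format illustration only; the placeholder problem below is invented,
     * not one of the firm's actual questions.
     */
    public class TimedExercise {

        /**
         * Problem 1 of 4 (placeholder): return the length of the longest
         * strictly increasing contiguous run in the input array. The
         * returned file is compiled and run against a hidden test suite.
         */
        public static int longestIncreasingRun(int[] values) {
            // Candidate's implementation goes here.
            throw new UnsupportedOperationException("not yet implemented");
        }

        // ...three more documented, empty method bodies in the same style.
    }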
While I completed the test successfully and got past that stage of the interviews, I couldn't help thinking that it was all a bit much. I ended up providing a solution to one of the problems that was fully functional but not as elegant as I would have liked, because of the serious time pressure. Perhaps part of their goal was to see how fast you can get your head around a complex mathematical problem, but it felt like what was really being rewarded was how fast you can churn out a solution, regardless of its underlying efficiency or elegance.
The firm was quite open that they would far rather have a hundred false negatives than a single false positive, which is why they intentionally made it so difficult. But something still told me that proving someone can function at that level of intensity isn't the same as proving they can sustain a consistent marathon pace. Then again, in financial services, where you may need to function that quickly when things are going wrong and you're losing money every minute, it's probably a bonus for the employer to know up front whether you can.
My Recommendation
I think I like the In-Advance Programming Problem as a general concept (a low-pass filter that weeds out people who aren't really interested in the job), but I'd probably mix in some elements from the Programming Death Race: fixed time limits and automated testing. I just wouldn't make it as conceptually tough; conceptual difficulty is better explored in person, working through someone's thought processes.
But for the in-person work, I would choose the Pair Programming Problem over pen-and-paper any day. It serves the same purpose, but it also ensures that you're seeing how someone actually works. We don't code on paper anymore, so we shouldn't assume that coding on paper tells us whether someone is a competent programmer.
If you want to test someone's thought processes, run through a conceptual problem; that's an extremely useful thing. But if you're going to ask for code, make sure you either specify that you're looking for pseudocode, or make clear that you don't care about the details and just want to see the rough shape of the code. Otherwise you're at risk of taking a very valuable test (can the candidate think about problems in a reasonable way?) and turning it into a trivia test.
And if you're not doing a real-world programming test, you should be. Too many candidates have gotten good at the depth-of-a-tree or nth-Fibonacci-number level of problem without any real grasp of software construction as a discipline. Does your interviewing methodology go beyond that and try to determine whether the candidate is someone you'd want to share code with? If not, figure out how it could, because pencil-and-paper coding doesn't.