Recovered from the Wayback Machine.
Here’s a perfect example of how the computer field is broken:
In a post at Coding Horror, based on earlier posts at Imran on Tech and Raganwald, the author parrots what the others state, that programmers can’t program. With lots of exclamation points.
Why make such a breathtakingly grandiose claim? Because of what happens in interviews. It would seem that the originator of this newest fooflah created a series of tests given during the interview process and found:
After a fair bit of trial and error I’ve discovered that people who struggle to code don’t just struggle on big problems, or even smallish problems (i.e. write an implementation of a linked list). They struggle with tiny problems.
So I set out to develop questions that can identify this kind of developer and came up with a class of questions I call “FizzBuzz Questions” named after a game children often play (or are made to play) in schools in the UK. An example of a Fizz-Buzz question is the following:
Write a program that prints the numbers from 1 to 100. But for multiples of three print “Fizz” instead of the number and for the multiples of five print “Buzz”. For numbers which are multiples of both three and five print “FizzBuzz”.
Most good programmers should be able to write out on paper a program which does this in under a couple of minutes. Want to know something scary? The majority of comp sci graduates can’t. I’ve also seen self-proclaimed senior programmers take more than 10-15 minutes to write a solution.
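For what it’s worth, the answer the question is fishing for really is trivial. A rough sketch (Python here, though the question names no language) is a handful of lines, which is rather the point: failing it in an interview room says more about the room than the programmer.

    # Print 1..100, substituting Fizz/Buzz/FizzBuzz for the appropriate multiples.
    for n in range(1, 101):
        if n % 15 == 0:        # multiple of both three and five
            print("FizzBuzz")
        elif n % 3 == 0:
            print("Fizz")
        elif n % 5 == 0:
            print("Buzz")
        else:
            print(n)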
Jeff Atwood of Coding Horror also goes on to quote others who run into the same problems: interviewees can’t seem to do even the simplest coding tasks during interviews. These gentlemen completely ignore the environment and focus on the grossest of generalities:
Programmers can’t program.
Here’s a clue for you: I don’t do well in programming tasks during interviews, and I’d love someone to come into my comments and tell me I can’t program based on this fact. No, I’ve only faked it while working for Nike, Intel, Boeing, John Hancock, Lawrence Livermore, and through 14 or so books, not to mention 6 years of online tech weblogging.
In fact, you’ll find a lot of people who don’t necessarily do well when it comes to programming tasks or other complex testing during job interviews. Why? Because the part of your brain that manages complex problem-solving tasks is the first to get more or less scrambled in high-stress situations. The more stress, the more scrambled. The more stressed we are, the more our natural defensive mechanisms take over, and the less energy goes into higher cognitive processes.
Why do you think that NASA, the military, and other organizations training people for high-risk jobs spend so much time in simulation? Because they want the tasks to be so ingrained that in a stress situation, people’s responses are almost automatic.
Add the potential for embarrassment to the strong desire to do well and the need to get the job, toss in a panel of arrogant twats sitting around a table looking directly at you while you do your little tests, and you have the makings of an environment that almost guarantees the elimination of many fine candidates.
Who does well in these kinds of testing situations? Good test-takers, the supremely self-confident (and typically just as arrogant), and the people who don’t care: none of whom is necessarily the best candidate.
The whole purpose of tests such as these is not to determine if a person has programming capability; how can one stupid test determine this? What these tests do, though, is add to the self-consequence of the person doing the interview.
“I can do this, but all these people can’t. Therefore, I’m so much better.”
It’s also a lazy interview technique, which shows that HR associated with the company doesn’t give a crap about the IT department.
Some justify such tests with, “We need people who can do well in stress situations.” Bilge water.
The stress one goes through as an outsider faced with a bank of insiders is completely different from the stress an individual goes through when they’re part of a team trying to fix a problem or roll out a product. Comparing the two is ludicrous, and nothing more than a demonstration of completely two-dimensional thinking: as though one form of stress were the same as another. My god, no wonder we’ve had few tech innovations lately if this is demonstrative of leadership in IT.
Having candidates bring in samples of code, and having the interviewer and interview team review them and question why decisions were or weren’t made, is an excellent way of getting insight into the person’s problem-solving skills, without the trained dog and pony show. Asking a person what approach they would use in a situation is superior to doing a random memory test on keywords. Providing applications and having the person offer their own critique is an amazingly effective way of getting insight, not only into their problem-solving skills, but also into their personality. If they point out errors but do so in a thoughtful manner, it’s a heck of a lot better than doing so in as scathing a manner as possible.
Looking at past applications or effort is another effective approach. New programmers with no job experience can provide pointers to open source applications; experienced people who have worked in an NDA situation can provide pointers to discussions and work online. Heck, Google the person’s name; that will tell the interviewer much more about the person than a silly programming test.
That primitive techniques such as the abysmally stupid “FizzBuzz” approach are used shows that companies are still missing out on good people because they have idiots doing most of the interviews.
And making the leap from how people do in interviews to such grand claims that programmers can’t program demonstrates that idiocy travels up the food chain.
You know what’s especially humorous? All the people who solve the test questions in the comments. What possible reason would a person have to do such a thing? It’s completely irrelevant to the environment in which these so-called tests are given. This no more shows that these people can program than it shows that the other people can’t.
The lack of logic in this whole thread is amazing.
What’s less funny, though, is the slavish adherence to Joel Spolsky’s elitist crap. Joel runs a smallish computer company with limited products: what the hell makes him the definitive answer on these topics? Perhaps the people should spend less time making pronouncements, and more time developing independent thinking skills.
Many of the comments in the Coding Horror post do mention these concerns, and provide other effective approaches to interviewing. If the people who create these tests actually read those responses, some good will have come from the discussion.
I have found, though, that people who write these kinds of tests aren’t always willing to consider other options. The other approaches just aren’t ‘clever’.