During technical interviews involving coding questions, what criteria do people use to evaluate code? Assuming there are multiple ways to code the same problem, what metrics can be used to evaluate and compare the answers objectively?
The interview is typically 1 hour long.
Some of the things I use are:
- Brevity and Simplicity
- Clean Design / API
- Scale, Perf, Concurrency
What do other people use? Anything I am missing in this list?
Since most programmers typically use an IDE and also API documentation to accomplish a task, I dislike interviews that focus on the syntax or the actual method name (unless it is something that is common knowledge).
The focus should be on seeing how the interviewee approaches and solves the problem. Ideally they should do it in such a way that demonstrates that they are knowledgeable about the principles that you are testing them on.
So I would focus more on the spirit of the code they’re writing, rather than the letter 🙂 (Just my opinion – I’m open to hearing counterarguments and counter-opinions).
To answer your question, it really depends on what you mean by your criteria:
Correctness – What does this mean? Syntax? I don’t think syntax should be too much of an issue (unless it appears terribly wrong). Perhaps you can limit correctness to the correctness of the approach: for example, whether they use an algorithm or approach that is clearly wrong.
Brevity and Simplicity – In what context? The approach in general? Sure, you can check to see that they are not overly verbose (i.e., how elegant the solution is).
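As a hypothetical illustration (in Python, since the question doesn’t name a language), both functions below are correct, but the second is what “not overly verbose” might look like in practice:

```python
# Verbose: manual index bookkeeping to collect even numbers
def evens_verbose(numbers):
    result = []
    i = 0
    while i < len(numbers):
        if numbers[i] % 2 == 0:
            result.append(numbers[i])
        i += 1
    return result

# Concise: the same behaviour as a list comprehension
def evens_concise(numbers):
    return [n for n in numbers if n % 2 == 0]
```

Neither answer is wrong; the point is that the second communicates the intent directly, which is the kind of elegance an interviewer can reward.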
Clean Design/API – I don’t know how well you can test this. It’s too difficult to come up with a very clean design and API in just 1 hour. What you can look for is whether they have the right idea or are moving in the right direction.
Testability – This is too broad, and you shouldn’t be too strict about it. What you can do is ask them how they would test their solution after they have designed it. Allow them to make changes and don’t hold them to their design; see how they approach the task of testing it or designing tests for it. When you write code and design something, it’s not a one-time deal: you’re constantly refining it.
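To sketch what “ask them how they would test it” can look like (a hypothetical Python example; the function and its edge cases are invented for illustration), a candidate might walk through cases like these:

```python
def reverse_words(sentence):
    """Reverse the order of words in a sentence."""
    return " ".join(sentence.split()[::-1])

# Edge cases a candidate might call out when asked how they'd test it:
assert reverse_words("hello world") == "world hello"          # typical input
assert reverse_words("single") == "single"                    # one word
assert reverse_words("") == ""                                # empty string
assert reverse_words("  extra   spaces ") == "spaces extra"   # messy whitespace
```

What you’re evaluating is whether they think of the empty and boundary cases at all, not whether they produce a polished test suite in the hour.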
Scalability, Performance, and Concurrency – Don’t specify these up front when you talk about the design, and as I said above, don’t penalize them if their first solution doesn’t perform or scale well. Instead, if you see that it is not scalable or doesn’t support concurrency well (or even if it does), ask them how their solution would perform if scalability and concurrency were a concern.
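To make that follow-up concrete, here is a hypothetical Python sketch of the kind of discussion it can prompt: the first counter has a read-modify-write race under threads, and adding a lock is one possible answer a candidate might reach for.

```python
import threading

class Counter:
    """Not thread-safe: += is a read-modify-write race under threads."""
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1

class SafeCounter:
    """Thread-safe: a lock serialises the read-modify-write."""
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:
            self.value += 1
```

A candidate who can explain *why* the first version is unsafe, and what the lock costs, is demonstrating exactly the thinking this criterion is meant to surface.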
Your aim is to see how they think and how they approach a solution, not how well they’ve memorized an API or memorized definitions. Also, if you went by your criteria alone, it would be very hard to find a candidate. As I mentioned initially, programmers don’t work in isolation with only the stuff they have in their head; they rely on multiple sources to accomplish what they are trying to do.
Sorry to be crude, but your criteria are far too anal. You’re dealing with human beings, who are (hopefully) going to work in a team; not optimising compilers. So Joe writes correct code that’s brief, clean, testable, scales well, etc. But Joe is a pedantic, eternal whiner who showers every 3 months and destroys every team dinner you organise by ranting incessantly about the derived methods.
See what I mean?