In teaching Scrum and Agile one consistent discussion topic is around the question "Can developers test?". Sometimes it is narrowed to "Can developers test their own code?", but some variant of this issue is almost always raised, often in the context of what a team should do if it doesn't have "enough" QA Engineers.
There is always a group that claims the answer to this question is "NO", their two main arguments being:
- "Developers and testers have different mindsets."
- "Developers are too close to their own code to test it effectively. It is like trying to spell check your own writing."
I began working in the industry as a developer a long time ago, before QA was a common concept in commercial software development. I was responsible for writing my code, testing it, and in some environments, moving it live. I laugh when I try to imagine a scenario where my code introduced a problem into Production and, when asked about it, I told my manager that he shouldn't expect any better because I didn't have the right mindset for testing. At that point in the industry not having the right mindset for testing meant that you didn't have the right mindset for development.
I don't find the second argument any more compelling. I check my own spelling all the time. And there is indeed a "mindset" at work, because I find the key is not to read what I wrote, which will lull me into reading what I think I wrote, but to look at each word individually. I see no reason a developer can't do the same thing: looking at the atomic components of the program rather than skimming over it at the highest, functional level. In fact I believe that, in many situations, the developer is better positioned to test at least some of the code. The programmer knows which parts of the software implement algorithms that they may not have 100% confidence in, which parts of the code need more careful boundary testing than others, and so on.
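To make that concrete, here is a minimal sketch (the function and its thresholds are invented for illustration, not taken from any real project) of the kind of boundary tests the author of a piece of code is best placed to write, precisely because they know where the edges are:

```python
def classify_age(age):
    """Classify an age into a bucket; reject impossible values."""
    if age < 0:
        raise ValueError("age cannot be negative")
    if age < 18:
        return "minor"
    return "adult"

# The author knows exactly where the boundaries sit, so those are the
# cases to hit deliberately rather than by accident:
assert classify_age(0) == "minor"    # lower edge of the valid range
assert classify_age(17) == "minor"   # just below the threshold
assert classify_age(18) == "adult"   # exactly on the threshold
try:
    classify_age(-1)                 # just outside the valid range
    raise AssertionError("expected ValueError for a negative age")
except ValueError:
    pass
```

A black-box tester would have to probe for those thresholds; the developer can go straight to them.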
Despite making this argument as eloquently as possible there is, at a minimum, skepticism, and often outright defiance in favor of the status quo. Why is this, especially now, when the lines between development and testing have become so blurred? No one would argue that a developer shouldn't write their own unit tests, which implies some sort of "testing mindset". And likewise many QA activities, specifically creating automated test suites with tools like Selenium, are more coding than classic testing. The other practice that would seem to undermine the "developers can't test" position is test-driven development. If developers can't test their code after it is written, how can they possibly define the tests that the code needs to pass before writing it?
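The test-driven sequence is easy to illustrate. In this hypothetical sketch (the `slugify` function and its expected behavior are invented for the example), the tests are written first, fail against a nonexistent implementation, and then just enough code is written to make them pass:

```python
import unittest

# Step 1: define the tests before the code exists. Running them at this
# point fails, which is the point: the tests specify the behavior.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(slugify("  Agile Testing  "), "agile-testing")

# Step 2: write just enough code to make the tests pass.
def slugify(title):
    return "-".join(title.strip().lower().split())

if __name__ == "__main__":
    unittest.main()
```

Defining those assertions is a testing act performed by the developer before a single line of product code exists, which is hard to square with the claim that developers lack the mindset for testing.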
My belief is that the attitude that developers can't test is a result of the campaign that has been waged over the better part of the last 3 decades to separate development and testing activities. There are certainly points in the development cycle when that separation serves a very valuable purpose. But there is also an element of falseness to the separation when it is applied universally. Do we really care that a piece of software is "code complete"? Is that an interesting milestone, or is it a holdover of the misguided attempt to place a manufacturing paradigm on software development? I've even seen Scrum teams that use "Code Complete" and "QA Complete" as statuses for work done within a sprint, operating a kind of "mini-waterfall" process within an Agile wrapper.
Having lived through this process, I can describe what I've observed over the last 4 decades as separate testing operations have come into being. Before doing so I should say that this is not, in any way, meant to be "Anti-QA". I think QA, as it has developed and matured, is an important element in a complete and mature software development environment, and I will describe my view of that environment in a follow-up article.
- Testing operations were initially created at least partly for financial reasons. In the early days, and still today to a certain degree, testers made less than developers. That meant that the more expensive resources could focus on what was considered the more valuable and skilled work, while the lower-paid resources handled the rest, namely testing. I had the personal experience of having my manager tell me directly, when I told him I was testing my code before turning it over, that "We have people to do that! You're supposed to be coding!"
- As testers were organized into QA Organizations there was an inevitable conflict between QA and development. Developers didn't like being told that their code didn't work, especially by people they perceived to be lower down the pecking order. Bug review meetings in this era often had more in common with gang warfare than any kind of collaborative effort to make the software better.
- To alleviate this tension the "Egoless Programming" philosophy was promoted. Originally defined and promoted by one of my heroes, Jerry Weinberg, Egoless Programming made a number of excellent points. Unfortunately, as with many good ideas, people focused on only a portion of it and ignored the rest. The "Ten Commandments" of Egoless Programming are:
  - Understand and accept that you will make mistakes.
  - You are not your code.
  - No matter how much "karate" you know, someone else will always know more.
  - Don't rewrite code without consultation.
  - Treat people who know less than you with respect, deference, and patience.
  - The only constant in the world is change.
  - The only true authority stems from knowledge, not from position.
  - Fight for what you believe, but gracefully accept defeat.
  - Don't be "the guy in the room."
  - Critique code instead of people—be kind to the coder, not to the code.
- So Egoless Programming does state that software development is a human endeavor and that humans, being error-prone, cannot help but have bugs in their code. But it was taken too far, becoming tacit approval for developers not to try to produce bug-free code, and it has been consistently used as a counter-argument when that goal is proposed. Personally I never subscribed to this philosophy. If you try to write bug-free code then, hopefully, you'll succeed in writing mostly bug-free code, which would be a vast improvement for many teams. I worked with a gentleman who, at one point, dared anyone, tester or developer, to find a bug in his code. The reward was a free breakfast. A number of defects were found and several breakfasts were bought, but I always admired the pride of authorship that lay behind this challenge.
- There is certainly truth to the fact that humans are error-prone, but I fear that the misapplication of the Egoless Programming philosophy has led to generations of developers who don't feel that they are particularly responsible for producing quality code. Instead they feel responsible for producing something that more or less works and then fixing the defects that QA reports (I refer to this as "QA-ready code"). There is no evidence that this approach produces quality software. In fact a valid argument could be made that it does just the opposite! Within an Agile environment the question/challenge I pose, not just to the teams I work with but to the individual developers, is "Are you producing QA-ready code, or Production-ready code?" If it is the former then the team is not living up to the basic goals and tenets of Agile.
So my answer to the question posed at the beginning of this article is "Absolutely, developers can test!". In fact I think that if a developer can't test they are not much of a developer. If you're not capable of thinking through the details of using the software that you are writing, how are you ever going to do a good job of writing it?
As always, comments and questions are welcome.