
"All genius is a conquering of chaos and mystery." - Otto Weininger

Thursday, January 8, 2015

Can Developers Test?





In teaching Scrum and Agile, one discussion topic comes up consistently: "Can developers test?". Sometimes it is narrowed to "Can developers test their own code?", but some variant of this question is almost always raised, often in the context of what a team should do if it doesn't have "enough" QA Engineers.

There is always a group that claims the answer to this question is "NO", their two main arguments being:
  • "Developers and testers have different mindsets."

  • "Developers are too close to their own code to test it effectively. It is like trying to spell check your own writing."

I began working in the industry as a developer a long time ago, before QA was a common concept in commercial software development. I was responsible for writing my code, testing it, and in some environments, moving it live. I laugh when I try to imagine a scenario where my code introduced a problem into Production and, when asked about it, I told my manager that he shouldn't expect any better because I didn't have the right mindset for testing. At that point in the industry not having the right mindset for testing meant that you didn't have the right mindset for development.

I don't find the second argument any more compelling. I check my own spelling all the time. And there is indeed a "mindset" at work: the key is not to read what I wrote, which lulls me into reading what I think I wrote, but to look at each word individually. I see no reason a developer can't do the same thing - looking at the atomic components of the program rather than skimming over it at the highest, functional level. In fact I believe that, in many situations, the developer is better positioned to test at least some of the code. The programmer knows which parts of the software implement algorithms they may not have 100% confidence in, which parts of the code need more careful boundary testing than others, and so on.
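
To make that concrete, here is a minimal sketch of what developer-written boundary tests might look like. It uses JUnit, and the pageCount() method is a hypothetical stand-in for whatever piece of code the developer knows is risky:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical example: the developer knows the ceiling division in
// pageCount() is the risky spot, so that boundary gets the attention.
public class PageCountTest {

    // Pages needed to show 'items' rows at 'pageSize' rows per page.
    static int pageCount(int items, int pageSize) {
        return (items + pageSize - 1) / pageSize;
    }

    @Test
    public void coversTheBoundariesTheAuthorKnowsAreRisky() {
        assertEquals(0, pageCount(0, 10));   // empty result set
        assertEquals(1, pageCount(1, 10));   // minimum non-empty
        assertEquals(1, pageCount(10, 10));  // exactly one full page
        assertEquals(2, pageCount(11, 10));  // one row spills over
    }
}

This is the "word by word" discipline applied to code: each assertion examines one atomic case rather than skimming the function as a whole.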

Despite making this argument as eloquently as possible I encounter, at a minimum, skepticism, and often outright defiance in favor of the status quo. Why is this, especially now, when the lines between development and testing have become so blurred? No one would argue that a developer shouldn't write their own unit tests, which implies some sort of "testing mindset". And likewise many QA activities, specifically creating automated test suites with tools like Selenium, are more coding than classic testing. The other initiative that would seem to undermine the "developers can't test" position is test driven development. If a developer can't test their code after it is written, how can they possibly define the tests that the code needs to pass before writing it?
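
As a small illustration of that last point, here is a test-first sketch (the Discount class and the 10% rule are invented for the example, not taken from any real project). The test is written before the implementation exists and defines the behavior; the minimal implementation is then written to make it pass:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Written first: this test fails (it won't even compile) until
// Discount is implemented, which is the point of test-first.
public class DiscountTest {

    @Test
    public void ordersOfOneHundredOrMoreGetTenPercentOff() {
        assertEquals(90.0, Discount.apply(100.0), 0.001);
        assertEquals(99.0, Discount.apply(99.0), 0.001);
    }
}

// Written second, with just enough code to satisfy the test above.
class Discount {
    static double apply(double total) {
        return total >= 100.0 ? total * 0.9 : total;
    }
}

A developer who can write DiscountTest before Discount exists is, by definition, testing.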

My belief is that the attitude that developers can't test is a result of the campaign that has been waged over the better part of the last 3 decades to separate development and testing activities. There are certainly points in the development cycle when that separation serves a very valuable purpose. But there is also an element of falseness to the separation when it is applied universally. Do we really care that a piece of software is "code complete"? Is that an interesting milestone, or is it a holdover of the misguided attempt to place a manufacturing paradigm on software development? I've even seen Scrum teams that use "Code Complete" and "QA Complete" as statuses for work done within a sprint, operating a kind of "mini-waterfall" process within an Agile wrapper.

Having lived through this process, here is what I've observed over the last 4 decades as separate testing operations have come into being. And before describing this I should say that this is not, in any way, meant to be "Anti-QA". I think QA, as it has developed and matured, is an important element in a complete and mature software development environment and will describe my view of that environment in a follow-up article.
  1. Testing operations were initially created at least partly for financial reasons. In the early days, and still today to a certain degree, testers make less than developers. That meant that the more expensive resources could be allowed to focus on what was considered the more valuable and skilled work, while the lower paid resources would handle the other work, namely testing. I had the personal experience of having my manager tell me directly, when I told him I was testing my code before turning it over, that "We have people to do that! You're supposed to be coding!"

  2. As testers were organized into QA Organizations there was an inevitable conflict between QA and development. Developers didn't like being told that their code didn't work, especially by people they perceived to be lower down the pecking order. Bug review meetings in this era often had more in common with gang warfare than any kind of collaborative effort to make the software better.

  3. To alleviate this tension the "Egoless Programming" philosophy was promoted. Originally articulated by one of my heroes, Jerry Weinberg, Egoless Programming makes a number of excellent points. Unfortunately, like many good ideas, people focused on only a portion of it and ignored the rest. The "Ten Commandments" of Egoless Programming are:

    1. Understand and accept that you will make mistakes.
    2. You are not your code.
    3. No matter how much "karate" you know, someone else will always know more.
    4. Don't rewrite code without consultation.
    5. Treat people who know less than you with respect, deference, and patience.
    6. The only constant in the world is change.
    7. The only true authority stems from knowledge, not from position.
    8. Fight for what you believe, but gracefully accept defeat.
    9. Don't be "the guy in the room."
    10. Critique code instead of people—be kind to the coder, not to the code.

  4. So Egoless Programming does state that software development is a human endeavor and that humans, being error-prone, cannot help but have bugs in their code. But it was taken too far, becoming tacit approval for developers not to try to produce bug-free code, and it has been consistently used as a counter-argument when that goal is proposed. Personally I never subscribed to this philosophy. If you try to write bug-free code then, hopefully, you'll succeed in writing mostly bug-free code, which would be a vast improvement for many teams. I worked with a gentleman who, at one point, dared anyone, tester or developer, to find a bug in his code. The reward was a free breakfast. A number of defects were found and several breakfasts were bought, but I always admired the pride of authorship that lay behind this challenge.

  5. There is certainly truth to the fact that humans are error-prone, but I fear that the misapplication of the Egoless Programming philosophy has led to generations of developers who don't feel that they are particularly responsible for producing quality code. Instead they feel responsible for producing something that more or less works and then fixing the defects that QA reports (I refer to this as "QA-ready code"). There is no evidence that this approach produces quality software. In fact a valid argument could be made that it does just the opposite! Within an Agile environment the question/challenge I use, not just with the teams I work with but with individual developers, is "Are you producing QA-ready code, or Production-ready code?". If it is the former then the team is not living up to the basic goals and tenets of Agile.

So my answer to the question posed at the beginning of this article is "Absolutely, developers can test!". In fact I think that if a developer can't test, they are not much of a developer. If you're not capable of thinking through the details of using the software that you are writing, how are you ever going to do a good job of writing it?

As always, comments and questions are welcome.




Monday, January 6, 2014

Agile article from CIO

A good friend, Mike Dwyer (@MikeD999), sent me this link (http://www.cio.com/article/734338/Why_Agile_Isn_t_Working_Bringing_Common_Sense_to_Agile_Principles?page=1&taxonomyId=3040) and asked me for my thoughts. The article struck me as typical of the "anti-Agile" material that shows up with alarming frequency in some of the industry's journals. After we discussed it he encouraged me to blog my comments, and since this did seem like a good representative article to rebut, I agreed. So, with thanks to Mike for the prodding, and apologies for taking so long to get this out, here they are.

First let me summarize the article without editorial content. I will apologize in advance for the number of lengthy quotes, but for those of you who haven't read the article I wanted to give you a feel for the points it makes, and to illustrate that I'm not exaggerating what the author is claiming. Here is the summary:

  1. "Agile has not only failed like other fad methodologies ... but ... is making things worse in IT"

  2. The author "recasts" the agile principle of early and continuous delivery of valuable software as "delivery over quality". His conclusion is that "focusing on continuous delivery has the effect of creating an unmanageable defect backlog while developers work to put something in front of the customer"

  3. He goes on to elaborate further, using an example where "a huge defect backlog had developed over the previous 18 months" which needed "groups of iterations devoted solely to ... the backlog". His conclusion in this case is to go further on his "delivery over quality" theme and state that "agile promotes the practice of ignoring defects".

  4. His next point is restating the agile principle of responding to change over following a plan as "development over planning". He elaborates that agile "does not distinguish between big and small changes" and that agile does not account for the fact that "every change has a cost". His justification? "People often change really big things late in the game using the rationale that ... it's an agile project..."

  5. His next criticism is his claim that agile promotes "collaboration over management". The support here is the statement that "In too many agile projects the Scrum Master is little more than a hapless cowboy waving his arms in despair while his charges go in all different directions at once. You cannot replace accountable and responsible project management with some immature utopian myth of self-organization."

  6. The author describes something he calls "agile thinking" which is "the ability to take the input of all the variable elements of the project—budget, time, design patterns, reusability, customer needs, corporate needs, precedents, standards, technology innovations and limitations—and come up with a pragmatic approach that solves the problem at hand in such a way that the product is delivered properly." His conclusion is that "Agile as a methodology cannot deliver agile thinking, however, and inevitably ends up preventing it."

  7. The final point I'll call out is the author's claim that "One of the hardest things for many developers is pragmatism." He cites a project based on "a purist design" that was "a marvel of abstraction" that failed.

WOW! When I see articles like this I would like to dismiss the whole thing as massively uninformed, but let's explore the details, just for fun. In writing this I am using Agile as I practice, teach and coach it. I also want to state that I'm not an Agile purist, apologist, or anything of the sort. But I have been using Agile successfully for a number of years and one of the things that struck me, as I'm sure it struck many of you, is that I don't recognize much Agile, at least "good Agile", in what the author describes. Unfortunately the industry contains far too many examples of teams claiming to be agile who are nothing of the sort (more on this later). But this doesn't mean that Agile doesn't work. It does reinforce the fact that more teams should seek professional training and/or coaching when adopting agile.

The author, like many people who comment on the failings of agile, clearly hasn’t worked on an actual agile team. If he had he would have experienced many if not all of the practices I describe in the remainder of this post.

Let's look at the points in detail:
  1. The author claims that agile stresses “delivery over quality”. This couldn't be farther from the truth. A couple of points from an “actual agile” implementation easily rebut this:

    1. With every team I work with, part of the Done criteria is a set of quality goals. Typically:

      1. No known defects against the story

      2. Unit tests written to provide xx% code coverage (80% is a typical minimum target).

      3. All test plans successfully executed.

      4. No regression or new functionality defects identified

      If these criteria aren't met then the work is not done and is not accepted. If a team is not doing something like this then it is hard to believe they are delivering production-ready code, which is a core tenet of any agile implementation!
    2. If a team finds itself creating a defect backlog (i.e. technical debt) then they deal with it in the sprint retrospective (why is it happening?) and the next sprint planning meeting (what do we do to address it?). With the root cause(s) identified and addressed, and a plan in place to deal with the backlog of technical debt, the team moves forward on a better footing and verifies the results in the next sprint retrospective. This happens every sprint (1-4 weeks) and, if done well, means that the team is continually improving both their own performance and the software they are creating. Compare this to the typical waterfall project, where opportunities for review, retrospection and improvement occur months (and months) apart, and provide only limited opportunity to improve the results of the current project.

    3. To stress this point, and deal with the author's "huge defect backlog" created over 18 months - there is no way this should happen in an actual agile environment. Looking at the author's specific scenario:

      • Where was the "inspect and adapt" principle as shown in Figure 1? Were there retrospectives? If so, how did they fail to address this issue for 18 months?


      Figure 1. Inspect & Adapt

      • How were the Done Criteria defined? It would seem that they either weren't defined well enough from a quality standpoint, or they weren't enforced effectively. But, again, how was this allowed to continue for 18 months?

      I've also seen teams that claimed they were doing Agile create a mess just like the author describes, through a combination of mis-management, horrid Agile execution, and the presence of far too many "Wallys". But that doesn't mean that Agile has "failed like other fad methodologies". Those teams fail no matter what methodology they (mis-)use!

    4. If you are doing agile correctly it should actually be “quality over delivery” because you're delivering production quality code at the end of each sprint and not relying on an interminable find/fix cycle at the end of the project.

  2. The author states that agile stresses “development over planning”. This can be true, but not in the way the author states. Personally I strenuously object to the agile idea that architecture evolves; this is one of the ideas that people like the author cite when pointing to a lack of planning. The reason for my objection is the following: if "architecture" is defined as "the design/definition of the things in the system that are hard to change" then expecting it to evolve in a reasonable and efficient way seems unrealistic. In setting up agile projects I will typically use the Sprint 0 concept, which is itself a bit controversial among some agile practitioners, to ensure that the things that need to be thoroughly thought through before development starts are given the attention they deserve.

  3. In making the "development over planning" point the author states that agile "does not distinguish between big and small changes" and furthermore that "Every change has a cost, but agile does not account for this". This seems to show a spectacular lack of understanding of the principles of Agile:

    • In agile all changes are small, by definition, since stories need to easily fit into a sprint. The key is effectively figuring out how large changes can be broken down into a series of smaller ones.

    • The author also uses "big" to mean risky. Contrary to the author's point, agile teams should not "change really big things late in the game"; they should be doing the opposite. Most teams use an analysis of "Risk" vs. "Value", as shown in Figure 2, to ensure that they are attacking the upper-right quadrant (High Risk/High Value) first.

      Figure 2 - Risk/Value Matrix

  4. I’ve been involved in building very sophisticated applications (highly configurable; multi-tenant; highly scalable; etc.) with agile teams and we’ve always been able to do this. Writing appropriately sized stories is an area where a lot of product owners struggle initially but all of the teams I've worked with have managed to get there with a little work and practice. High risk/value stories may emerge late in a release cycle and have to be tackled, but that should be the exception, not standard operating procedure. The author clearly doesn't understand the type of planning that does occur in an effective Agile environment.

  5. Then the author doubles down on the previous, spurious, quality argument ("in for a penny, in for a pound" so to speak). But one thing I found interesting about his position is the “Ode to Traditional Practices (Waterfall?)” that he engages in. Really? The original article that defined the waterfall method (http://www.cs.umd.edu/class/spring2003/cmsc838p/Process/waterfall.pdf) actually says that the method (as it has come to be practiced - see Figure 3) doesn’t work for any kind of complex project.


    Figure 3 - Typical Waterfall Process

    One could make the argument that the industry has been heading toward Agile for the last 40 years. Royce's original article at least hints at prototyping and iterative development, and it is those initiatives, along with the adoption of incremental development, that helped create a path to Agile practices.

  6. “Collaboration over management” – the key point that the author makes here is the Scrum Master as hapless cowboy. He sees the Scrum Master as a Project Manager from whom all authority has been stripped, which utterly misses what the Scrum Master is and does. There is no reason for the Scrum Master to be "waving his arms in despair" because he/she has no "charges {running} in all different directions at once". The Scrum Master does not have "charges" but works with the team to facilitate, to remove impediments, and to make sure that the team and the individuals within it are being faithful to their commitments. If these things become a problem the Scrum Master takes the problem to the team and they collaborate on a solution. The Scrum Master role can be challenging because, with a team that is new to Agile, they may have to use a little more of their "command & control" repertoire than is considered "Agile", as discussed below. The key to being a successful Scrum Master is realizing this and working to mature the team so they can play more of the "servant-leader" rather than project manager role.
    To do this well requires a team capable of growing into their self-managed / self-organized role. There are plenty of Wallys in the industry, and a team of them couldn’t order lunch without supervision, but they are just as poisonous to a “traditional” project as to an agile project. The argument could be effectively made that agile, with short delivery cycles, frequent retrospectives, and team accountability, is more effective at rooting out the Wallys than monolithic waterfall projects. There is a nuance here that the author, not surprisingly, misses: adopting agile can be challenging for a new team for a number of reasons:

    1. Many new teams are building software on top of legacy code that already has quality problems and typically doesn’t have the set of automated tests that provides the safety net agile relies on. There are ways to address this, but the situation is almost always going to be improved incrementally and there will be challenges until it gets better.

    2. New teams are used to command and control environments. Transitioning to working in a self-directed fashion takes time and some getting used to. In working with new teams I describe 3 stages that they will go through:

      1. "Tell me what to do" - still in the old command and control mindset.

      2. "I'll tell you what I'm doing" - taking initiative, but still not in the team mindset.

      3. "We'll tell you what we'll do" - this is the team mindset that is needed for high performance agile.

      This is probably one of the biggest areas where teams benefit from agile coaching. But it doesn’t mean that Agile doesn’t work, just that it takes some time and effort to get there.

  7. The point around "agile thinking", the importance of prioritization, and the failure of Agile to address these is particularly surprising. The author never mentions the Product Owner role, which is where all of these things come together. It makes me wonder if the projects the author worked on actually had this role defined. It is fundamental to Agile that the Product Owner factors in all of the aspects described by the author to establish business value and then prioritizes the most important things, from a business value standpoint, to be addressed first. Agile Release Planning takes all of this and creates a projected, and tentative, roadmap that puts the highest priority items in a schedule context beyond the current sprint.

  8. The basic premise seems to be summed up in the statement “One of the hardest things for many developers is pragmatism. Rather than think practically, they inevitably fall into abstract approaches to problems”. One of the problems with generalizations is they are always wrong (irony is intentional). This thinking is an example of the “programmers just want to build cool stuff and need adult supervision” camp. The flip side is the “managers are useless idiots and if you just keep them away from the developers everything will be perfect” camp. Both are equally false in general. There are un-pragmatic developers and pointy-haired bosses, to be sure, but in almost 40 years in the software industry I've found that people are a lot more complex than that and can be trusted more than this attitude implies. To use this as an argument against collaboration seems naive and overly simplistic. I would counter by making the point that Agile focuses the team on delivering Business Value, as defined and prioritized by the Product Owner, each sprint. How much more pragmatic can you get than that? The fact that the author worked on a project where an un-pragmatic, "purist design" was a problem isn't an Agile failure. If anything it shows why the YAGNI ("You Aren't Going to Need It") principle is so valuable. If that principle had been applied, that design would probably not have been allowed, which means the project might have been saved if it had actually followed good Agile practice.

  9. The author hasn’t done any interesting research; he is publishing an opinion piece. He justifies his position with the statement "I've been involved in a number of agile projects". But based on the total lack of insight into how Agile really works, it seems obvious that although the projects the author was on said they were Agile, they clearly weren't. The most damning thing the author says here is that he was "overall responsible manager" on at least one of the projects. Given the author's lack of Agile knowledge it is hard to believe that particular project was doing anything that most knowledgeable people would recognize as Agile. One of the biggest problems we have with Agile, at least from a PR standpoint, is that there are so many teams who say they are doing Agile and they just aren't! An alarmingly large number of companies I've seen over the past couple of years say they are doing Agile, but I believe that with 1-4 questions you can determine whether this is true. The questions are:

    • "Describe your daily standup." Typical response: "We don't have one."

    • "How do you manage the product backlog?" Typical response: "We don't have a backlog."

    • "How long is your sprint?" Typical response "It depends..." That's enough.

    • "Describe your Done criteria." Typical response: blank stare.

    There are obviously more questions that can be asked, but I’ve found that these four seem to be indicative of how well a team is doing in terms of following an agile process.

  So, my conclusion is that this article is basically a massive pile of misinformation that somehow CIO decided to publish. Those of us who have been practicing Agile for a while know that it can and does work. We also know that it is not easy. It requires us to do things that are uncomfortable at times: to communicate openly and honestly, to be transparent, to collaborate effectively, to be accountable. Teams take time getting there, but once there they can be extremely productive - despite what the author experienced on his several, almost certainly poorly run, Agile projects.

    As always, thoughts and comments are welcome.

Tuesday, December 31, 2013

Cone of Market Coverage

Delivering a product via the Software as a Service (SaaS) mechanism has a number of advantages, both business and technical. But in order to successfully realize those advantages a company needs to follow a disciplined approach to how it grows, and in particular to the kind of customers it takes on. In working with several SaaS clients over the past year I've found the idea of the Cone of Market Coverage, described below, to be useful. Over time the functionality of an application should increase, as shown in Figure 1.




Figure 1: Cone of Market Coverage


This concept ties into two critical success factors on the technical side of a SaaS business: scalability and quality. Scalability of the application, of Data Center operations, and of the business in general allows the company to market and add new clients aggressively and efficiently. Clients trust the company to run the application for them and assume a level of quality in return. They do not expect unplanned downtime or random errors, and if these start to occur they may push for their own isolated environment, or opt for an on-premises or hosted build.

One of the key ways to achieve scalability and quality is to maintain a single code base. This makes it easier for the company to operate a multi-tenant environment, and it leverages the development effort by focusing all the work on that single code line. Maintaining high quality is much easier when there are no multiple code lines or customized code to consider. Equally importantly, it creates an environment where the company can most effectively focus its resources on expanding the product, offering market-differentiating features and functions.

As the company penetrates the market, the degree to which it can stay within its Cone of Market Coverage, shown in Figure 2 below, will help determine how easy it will be to keep to the single code line approach. Target clients fall within the cone: they use existing functionality and don't require client-specific changes to the code line.




Figure 2: Target Clients


Sometimes clients that are just outside the cone will be encountered, referred to as Near Outliers. These can often be handled with reasonable changes to the product roadmap, as shown in Figure 3. In fact in some cases this kind of market feedback takes the product in a good direction that may not have occurred otherwise. This, of course, depends on the changes being made within a product context, rather than as a customization.




Figure 3: Near Outlier


The real challenge comes when a potential client that lies well outside the cone is encountered, referred to as a Distant Outlier (see Figure 4).




Figure 4: Distant Outlier


In general it is very hard or impossible to meet the needs of a client like this within a standard product context. This generally leads to one of the following approaches, all of which are difficult to deal with in a SaaS environment:

  • Pseudo product code, where the required changes are gated by configuration parameters that control single-client behavior. As more of these creep into the code, the challenge of managing all of the parameters, and the code they control, increases exponentially (see the sketch after this list).

  • Customization of the single code line, which makes it very difficult to maintain the integrity of that single code base. It becomes more difficult to make changes since the client-specific code needs to be considered, and as the number of customizations grows the problem compounds. In addition, the testing of releases grows progressively more challenging as the effort needed to ensure that the custom code works from release to release increases.

  • Branching the code into multiple code lines. This creates redundant work in maintaining those lines, and the inefficiency grows as the number of code lines increases.
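
To make the first of these approaches concrete, here is a deliberately simplified sketch of "pseudo product code" (the client names and flags are invented for the example). Each Distant Outlier deal adds another client-specific branch, and the combinations quickly outgrow any test cycle:

import java.util.Set;

// Simplified sketch: each client-specific configuration flag adds a
// branch to otherwise shared product code.
public class InvoiceRenderer {

    String render(String invoiceSummary, Set<String> clientFlags) {
        StringBuilder out = new StringBuilder(invoiceSummary);

        if (clientFlags.contains("acme.custom-tax-rounding")) {
            out.append(" [Acme tax rounding]");
        }
        if (clientFlags.contains("globex.legacy-date-format")) {
            out.append(" [Globex date format]");
        }
        if (clientFlags.contains("initech.extra-approval-step")) {
            out.append(" [Initech approval step]");
        }

        // With n independent flags there are 2^n possible configurations:
        // 3 flags is 8 paths to test, 10 flags is 1,024.
        return out.toString();
    }
}

Three flags look harmless; it is the tenth and twentieth that make releases untestable, which is the exponential growth referred to above.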

All of these approaches create problems for a SaaS vendor, which is why understanding, and attempting to stay within, the Cone of Market Coverage is an important concept for any SaaS company. There will be reasons a company may feel pushed to go outside its particular Cone, and there may be times when it is a necessity, but the degree to which these decisions are made deliberately, keeping the Cone in mind, will allow the company to control their impact and help ensure their success.

Would love to hear your thoughts on this concept and how it maps to your own experiences in the SaaS space.

Friday, January 18, 2013

Agile Coaching, Part 1

I had the opportunity to talk to the New York City Scrum User Group last night on this topic (nycscrumusergroup/) and wanted to summarize some of the key points of that talk. The industry press is full of statistics about the increasing rate of agile adoption and there are many teams accomplishing excellent results with agile. But there are also a lot of teams that are struggling.

Agile is deceptively complex. In the classic waterfall project we're all familiar with:

  • Communication is between silos at stage boundaries

  • The process is well defined

  • Delivery is some time in the future, mostly



In contrast, with agile:

  • Communication is constant

  • The process is not well-defined

  • Delivery is in the next few weeks, at most


All of this contributes to make agile more complex, while at the same time it appears deceptively simple. This, in turn, leads teams to employ what's becoming known as the SCRUM-BUTT approach, "enhancing" their agile process with waterfall or other adornments to address the perceived shortcomings within the core agile processes. Now some teams have made useful additions to their process, but in too many cases the problems could have been solved by applying the basic agile framework better.

When I coach teams I start by establishing a basic understanding of why agile works:

Agile works because we don't need to pre-define everything. We don't need to pre-define everything because:

  • the team and the Product Owner have an open dialog to ensure the sprint requirements are understood

  • it is easy to change the software

  • we keep entropy from accumulating in the code

  • we have a safety net of automated tests that give us confidence that we haven't broken existing functionality


Agile also works because we deliver production-ready software at the end of each sprint. We do this because:

  • we have that safety net of automated tests

  • we have a clear definition of "Done"

  • we ensure that we meet our definition of done on each story


If the team doesn't embrace these core principles, and hold themselves and each other accountable for meeting them every day, the team will probably struggle. For example, if the automated tests don't adequately cover the system (very often a problem for teams just starting out, putting them in a hole they then struggle to ever get out of), changes to the code may introduce defects that slip through into QA or even production. The team then ends up constantly interrupted to fix priority defects, which means they struggle to deliver their committed stories each sprint. This in turn creates pressure to do things faster, which leads to cutting corners, which creates even more technical debt, until someone concludes that "Agile doesn't work!". That's the wrong conclusion. The root of the problem is that the team isn't executing the process correctly.

Once the team grasps these core principles we can move on to actually defining and adopting the process. This involves establishing the "Philosophy of Agile", which builds closely on the core principles, then moving on to the basic fundamentals of the Scrum process. I'll talk about that in part 2.

The last thing I stress to the team, going back to the SCRUM-BUTT phenomenon:


  • We will get better by doing it and delivering value along the way

  • And, if you’re tempted to change the process...


    • STOP!

    • Are you at Ri? (If you're not familiar with the ShuHaRi concept you can read about it here (martinfowler.com ShuHaRi).) And if the team answers yes, the follow-on question is "Really???", since teams frequently overestimate their progress on this scale

    • If you are not at Ri you shouldn't be changing the process, so go back to first principles

    • And if you don't understand how to do this, get help




More on this in the next post. And, by the way, Happy New Year!



Sunday, December 30, 2012

Proximity to Requirements

In working with a set of teams that were consistently having problems delivering on their commitments a friend and colleague of mine described what he felt was a key problem by using the phrase "Proximity to Requirements". By this he meant keeping the chain of requirements, from their original source to the developer(s), as short as possible. His feeling was that the developers were too isolated from the end users, leading to a loss of fidelity as the requirements were translated through intermediate layers.

I think he hit the nail on the head in this situation. The teams, allegedly following an agile process, had no actual user representatives working directly with them. The client described what they wanted to a member of the account management team. From there it was handed over to a business analyst who wrote a high-level design document that the customer signed off on. These were then handed off to a development manager who translated them into stories for the teams. The end result was software that didn't meet the client's needs and had to be reworked, often multiple times, before finally being accepted.

One obvious problem in the situation above is that it is very difficult to successfully implement an agile development process if the rest of the organization is still stuck in a waterfall model. The company culture has to change to become (more) agile overall. This can be a challenge when you need to have client sign offs and other contractual milestones, but there are solutions to these challenges. But that's a topic for a later post.

The concept of maintaining "Proximity to Requirements" should be something every agile team keeps in mind. Do you have a user representative as part of the team who can communicate user needs directly to the developers? If not, that could be a red flag. And is that person still close to the client's needs? If they've been "embedded" with the development teams for a sufficiently long time they may have lost contact with the end users.

I would recommend making sure that your user representatives (Product Owners, product managers, UX experts, or whatever you use) have a chance to renew that contact. Have them involved with the sales and/or services teams from time to time, and get them out into the field interacting with current and prospective customers, so their perspective stays current and they can be effective in their role. Whenever possible I also suggest getting the developers that kind of exposure. At a minimum, can all of the developers actually run through an end-to-end demo of the product? I'm not suggesting having them do actual demos, since their time is better spent creating business value within the system, but understanding how the product works at the user level will put them in a better position to understand what they're being asked to build and will help ensure that the entire team maintains that proximity to the real-world requirements.

Saturday, December 15, 2012

Stop Testing and Start Thinking

In a stand-up meeting recently one of the developers said that a story was going to take longer than expected because "it's really complex and is going to require a lot of testing". There's nothing wrong with discovering that a story is more complex than anticipated and will take longer. What struck me was that so often, in current software development, testing is our first answer to complex software problems. Testing is an important part of software development, but I'm afraid that one of the things we've lost in the move to agile methodologies is the concept of design. One of the principles behind the Agile Manifesto is "Continuous attention to technical excellence and good design enhances agility.", but too often design is considered non-agile.

I see this most often when working with teams that are relatively new to agile, or with more mature teams that are stuck at a low level of agile maturity. They have heard about the agile principle of simplicity (maximizing the amount of work not done) and YAGNI (You Aren't Going to Need It) and they have put all their focus on coding. This can get these teams into trouble, especially in areas like scalability and performance, because some problems need to be thought through and designed before coding starts. The team should understand the overall targets that they need to hit (expected perceived response time for the user; processing loads per unit time; etc.), but these are usually high-level goals and are often not translated into acceptance criteria at the individual story level.
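
As an illustration of what translating such a target down to the story level might look like, here is a rough sketch of a story-level acceptance check. Everything in it is invented for the example: the SearchService stub, the query, and the 200 ms / 95th-percentile budget would come from the team's actual targets:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical acceptance check for a story with the criterion:
// "search responds within 200 ms at the 95th percentile".
public class SearchLatencyCheck {

    public static void main(String[] args) {
        SearchService service = new SearchService();
        List<Long> latenciesMs = new ArrayList<>();

        for (int i = 0; i < 100; i++) {
            long start = System.nanoTime();
            service.search("sample query");
            latenciesMs.add((System.nanoTime() - start) / 1_000_000);
        }

        Collections.sort(latenciesMs);
        long p95 = latenciesMs.get(94); // 95th of 100 sorted samples
        if (p95 > 200) {
            throw new AssertionError("p95 latency " + p95 + " ms exceeds the 200 ms budget");
        }
        System.out.println("p95 latency " + p95 + " ms is within budget");
    }

    // Minimal stub so the sketch runs; a real check would exercise the
    // service the story actually touches.
    static class SearchService {
        String search(String query) {
            return "results for " + query;
        }
    }
}

Whether or not a team automates a check like this, writing the budget into the story's acceptance criteria forces the design conversation to happen before coding starts.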

To deal with this I like to introduce 2 practices that are not universally embraced in the agile community: Sprint Zero, and Research Spikes.

Sprint Zero

Sprint Zero occurs at the start of a Release Cycle. The team identifies stories that are either too big, or where they feel there is a lack of understanding and/or technical risk, and "drills in". This may mean breaking large stories into several smaller ones, or doing just enough design to give the team confidence that they understand the work. A key is to keep this sprint as short as possible. I've heard of teams spending 3-6 weeks in Sprint Zero, but my rule of thumb is that it should never be longer than a normal sprint, and preferably less; if the team is doing 2-week sprints then a 1-week Sprint Zero is what I shoot for. You are preparing to deliver business value but aren't actually delivering any in this cycle, which is exactly why it should be as short as possible.

Research Spikes

In the sprint planning that starts each sprint the team will occasionally have a story that they are having difficulty breaking into tasks and/or estimating. Or there may be a story that is too large and they are having trouble breaking it into smaller stories (my rule of thumb is that a story shouldn't take longer than 1/2 of a sprint). These may be candidates for a research spike. Also, in backlog grooming, stories may be identified that the team considers high risk. These may also be good candidates. A research spike is a story where the output is not working code, but is one or more of the following:

  • A detailed task breakdown, with estimates, for the story

  • A design for the story, captured in the story itself or on the team's wiki, depending on the standards established for the team

  • An architecture for how to implement the story

  • A detailed test plan for the story


Research Spikes should be used sparingly. If you have a good backlog that is periodically groomed almost all stories should be able to go through the normal sprint planning process without issue. If you're doing research spikes more frequently than once every half dozen or so sprints you might have a problem with the backlog or with the team.

You're not directly delivering business value during Sprint Zero or while working on a Research Spike, so these need to be used sparingly. But they can be valuable tools in helping teams reach the next level of performance by moving beyond writing code and then trying to test it to completeness. Test driven development (TDD), if done well, is a reasonable compromise, because it requires the developer to think through the different aspects of the solution, but in my experience too few teams have adopted TDD, and those that have often practice it with insufficient rigor.

I guess I could have titled this post "Stop Using Testing as a Substitute for Thinking", but it wasn't as catchy. Don't design everything, but when faced with a complex problem, think it through. Draw a sketch of the interactions in Visio, or LucidChart, or on a piece of paper. You'll throw it away later, so don't make it pretty. Its value is to get you thinking about the different aspects of the problem, which makes it more likely that your solution will be complete. Then, after coding it, do your testing. And if you find problems, maybe you didn't do enough thinking.





Wednesday, December 12, 2012

Why do you hear "Schedule" when I say "Quality"?

I was sitting in on a release retrospective recently when the following situation occurred. The release had created too much technical debt and we were reviewing one of the newly introduced defects where the handling of transactions was incorrect, causing deadlocks in production. When I asked the obvious question, "How did this get through the code review?" one of the developers blurted out "We didn't have time to do all the code reviews!".

The thought that struck me was that no matter how many times quality is emphasized as the top priority, the team really thinks it's schedule. In my experience, and talking to people in the industry, this is a very common problem.

This is a SaaS company, and they depend on the software to generate revenue, so it is clear that schedule is always a consideration, but it can be very difficult to convince the team it isn't the primary consideration. And this was a good case study. The team had never asked for more time and had never identified that the code reviews weren't done. These are separate issues, so let me address them separately.

No Time

Developers (and testers, and product owners, etc.) are used to working in environments where the schedule is master - where you get a bonus for releasing software on time, whether it's buggy or not, and are chastised for releasing quality software a little late. It's hard to break that mindset. In order to do that I suggest the following:

1. The question "when is it going to be done?" should be the last question you ask, if you ask it at all. The more emphasis you place on the schedule, the more the team will believe it's what you care about and what you're measuring them on.

2. Focus heavily on "quality oriented" activities - code reviews, unit tests, performance tests, etc. And don't ask about these in a perfunctory manner - dig in. Ask to see code review comments (if possible these should be captured in the source code control system). Ask about any challenges people had writing tests for the new code: were there database dependencies? How did we simulate "something" in the test environment? Depending on the answers you get, you may be able to answer "when will it be done" by yourself. And you'll have better insight into where the team is putting their emphasis.

3. If and when the team asks for more time to ensure that the story they are delivering is "Done-Done" (i.e. of high quality), give it to them without reservation. Don't make a face, don't roll your eyes, and don't be reluctant about it. I have had teams ask for more time for testing when in reality they needed more time to finish coding. This is not acceptable, partly because it will lead to shortchanging testing and code reviews, but mostly because it is a breakdown of trust.

Lack of Openness

Trust is a large part of any development project, but especially an agile project. Lack of trust hides information that should not be hidden and creates silence where conversation is critical. Trust goes hand-in-hand with courage, another key agile value. Team members need to have the courage to raise difficult issues, and the only way most team members are going to have that courage is if they trust the rest of the team, and management, to react appropriately. Speaking for myself, it can sometimes be hard to separate my gratitude at having the issue identified and surfaced from my frustration that it occurred, especially if it was preventable. But you have to focus on the gratitude and deal with solving the issue and moving forward. Save your concerns for later; these are often good items to deal with in the Sprint Retrospective.

In Conclusion

If the team thinks that the schedule is the most important priority you're at risk of ending up with poor quality and a slipped schedule. If they understand that quality is the priority you'll have a better chance of hitting your schedule, as well as better quality. It seems like a "no-brainer", but if your team is hearing "schedule" when you say "quality" you need to ask what you are doing that is contributing to that. And this is your contribution to trust and openness: if you're doing something that is creating the wrong impression with the team, you need to address it with them, acknowledge your mistake, and re-confirm your and the team's common goals.

- Posted from my iPad