
"All genius is a conquering of chaos and mystery." - Otto Weininger

Sunday, December 30, 2012

Proximity to Requirements

In working with a set of teams that were consistently having problems delivering on their commitments, a friend and colleague of mine described what he felt was the key problem with the phrase "Proximity to Requirements". By this he meant keeping the chain of requirements, from its original source to the developer(s), as short as possible. His feeling was that the developers were too isolated from the end users, leading to a loss of fidelity as the requirements were translated through intermediate layers.

I think he hit the nail on the head in this situation. The teams, allegedly following an agile process, had no actual user representatives working directly with them. The client described what they wanted to a member of the account management team. From there it was handed over to a business analyst, who wrote a high-level design document that the customer signed off on. That was then handed off to a development manager, who translated it into stories for the teams. The end result was software that didn't meet the client's needs and had to be reworked, often multiple times, before finally being accepted.

One obvious problem in the situation above is that it is very difficult to successfully implement an agile development process if the rest of the organization is still stuck in a waterfall model. The company culture has to change to become (more) agile overall. This can be a challenge when you need client sign-offs and other contractual milestones, but there are solutions to those challenges. That, however, is a topic for a later post.

The concept of maintaining "Proximity to Requirements" is something every agile team should keep in mind. Do you have a user representative on the team who can communicate user needs directly to the developers? If not, that could be a red flag. And is that person still close to the client's needs? If they've been "embedded" with the development teams for long enough, they may have lost contact with the end users.

I recommend making sure that your user representatives (product owners, product managers, UX experts, or whatever role you use) have a chance to renew that contact. Have them work with the sales and/or services teams from time to time, and get them out into the field interacting with current and prospective customers, so their perspective stays current and they can be effective in their role. Whenever possible I also suggest getting the developers that kind of exposure. At a minimum, can all of the developers actually run through an end-to-end demo of the product? I'm not suggesting they do actual demos, since their time is better spent creating business value within the system, but understanding how the product works at the user level puts them in a better position to understand what they're being asked to build, and helps ensure that the entire team maintains that proximity to the real-world requirements.

Saturday, December 15, 2012

Stop Testing and Start Thinking

In a stand-up meeting recently one of the developers said that a story was going to take longer than expected because "it's really complex and is going to require a lot of testing". There's nothing wrong with finding out that a story is more complex, and will take longer, than expected. What struck me was that so often, in current software development, testing is our first answer to complex software problems. Testing is an important part of software development, but I'm afraid one of the things we've lost in the move to agile methodologies is the concept of design. One of the principles behind the Agile Manifesto is "Continuous attention to technical excellence and good design enhances agility," but too often design is considered non-agile.

I see this most often when working with teams that are relatively new to agile, or with more mature teams that are stuck at a low level of agile maturity. They have heard about the agile principle of simplicity (maximizing the amount of work not done) and YAGNI (You Aren't Gonna Need It), and they have put all their focus on coding. This can get these teams into trouble, especially in areas like scalability and performance, because some problems need to be thought through and designed before coding starts. The team should understand the overall targets that they need to hit (expected perceived response time for the user; processing loads per unit time; etc.), but these are usually high-level goals and are often not translated into acceptance criteria at the individual story level.

To deal with this I like to introduce two practices that are not universally embraced in the agile community: Sprint Zero, and Research Spikes.

Sprint Zero

Sprint Zero occurs at the start of a Release Cycle. The team identifies stories that are either too big, or where they feel there is a lack of understanding and/or technical risk, and drills in. This may mean breaking large stories into several smaller ones, or doing just enough design to give the team confidence that they understand the work. The key is to keep this sprint as short as possible. I've heard of teams spending 3-6 weeks in Sprint Zero, but my rule of thumb is that it should never be longer than a normal sprint, and preferably less; if the team is doing 2-week sprints, I shoot for a 1-week Sprint Zero. You are preparing to deliver business value, but aren't actually delivering any in this cycle, which is exactly why it should be as short as possible.

Research Spikes

In the sprint planning that starts each sprint, the team will occasionally have a story that they are having difficulty breaking into tasks and/or estimating. Or there may be a story that is too large and they are having trouble breaking it into smaller stories (my rule of thumb is that a story shouldn't take longer than half a sprint). These may be candidates for a research spike. Also, in backlog grooming, stories may be identified that the team considers high risk; these may also be good candidates. A research spike is a story whose output is not working code, but one or more of the following:

  • A detailed task breakdown, with estimates, for the story

  • A design for the story, captured in the story itself or on the team's wiki, depending on the standards established for the team

  • An architecture for how to implement the story

  • A detailed test plan for the story


Research Spikes should be used sparingly. If you have a good backlog that is periodically groomed almost all stories should be able to go through the normal sprint planning process without issue. If you're doing research spikes more frequently than once every half dozen or so sprints you might have a problem with the backlog or with the team.

You're not directly delivering business value during Sprint Zero or while working on a Research Spike, so these need to be used sparingly. But they can be valuable tools in helping teams reach the next level of performance by moving beyond writing code first and then trying to test it into correctness. Test-driven development (TDD), if done well, is a reasonable compromise, because it requires the developer to think through the different aspects of the solution up front, but in my experience too few teams have adopted TDD, and those that have often practice it with insufficient rigor.
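To make that concrete, here's a minimal sketch of the TDD rhythm in Python. The function and its behavior are hypothetical, invented purely for illustration; the point is that in TDD the test cases are written first, and writing them forces you to decide up front how edge cases should behave, before any implementation exists.

```python
import unittest

# Hypothetical function under test. In TDD the test class below is written
# first; making it pass forces decisions about over-commitment and bad
# input before coding starts, rather than during testing afterward.
def remaining_capacity(capacity, committed):
    """Story points left in a sprint; never negative."""
    if capacity < 0 or committed < 0:
        raise ValueError("points cannot be negative")
    return max(capacity - committed, 0)

class RemainingCapacityTest(unittest.TestCase):
    def test_normal_case(self):
        self.assertEqual(remaining_capacity(20, 13), 7)

    def test_overcommitted_floors_at_zero(self):
        # Decided up front: clamp to zero rather than return a negative.
        self.assertEqual(remaining_capacity(20, 25), 0)

    def test_negative_input_rejected(self):
        with self.assertRaises(ValueError):
            remaining_capacity(-1, 5)
```

The arithmetic is trivial; what matters is that each test encodes a design decision that had to be made before the code existed.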

I guess I could have titled this post "Stop Using Testing as a Substitute for Thinking", but it wasn't as catchy. Don't design everything, but when faced with a complex problem, think it through. Draw a sketch of the interactions in Visio, or LucidChart, or on a piece of paper. You'll throw it away later, so don't make it pretty. Its value is in getting you thinking about the different aspects of the problem, which makes it more likely that your solution will be complete. Then, after coding it, do your testing. And if you find problems, maybe you didn't do enough thinking.





Wednesday, December 12, 2012

Why do you hear "Schedule" when I say "Quality"?

I was sitting in on a release retrospective recently when the following situation occurred. The release had created too much technical debt and we were reviewing one of the newly introduced defects where the handling of transactions was incorrect, causing deadlocks in production. When I asked the obvious question, "How did this get through the code review?" one of the developers blurted out "We didn't have time to do all the code reviews!".

The thought that struck me was that no matter how many times quality is emphasized as the top priority, the team really thinks it's schedule. In my experience, and talking to people in the industry, this is a very common problem.

This is a SaaS company, and they depend on the software to generate revenue, so it is clear that schedule is always a consideration, but it can be very difficult to convince the team it isn't the primary consideration. And this was a good case study. The team had never asked for more time and had never identified that the code reviews weren't done. These are separate issues, so let me address them separately.

No Time

Developers (and testers, and product owners, etc.) are used to working in environments where the schedule is master: where you get a bonus for releasing software on time, whether it's buggy or not, and get chastised for releasing quality software a little late. It's hard to break that mindset. To do that I suggest the following:

1. The question "when is it going to be done?" should be the last question you ask, if you ask it at all. The more emphasis you place on the schedule, the more the team will believe it's what you care about and what you're measuring them on.

2. Focus heavily on "quality oriented" activities - code reviews, unit tests, performance tests, etc. And don't ask about these in a perfunctory manner - dig in. Ask to see code review comments (if possible these should be captured in the source code control system). Ask about any challenges people had writing tests for the new code: were there database dependencies? How did we simulate "something" in the test environment? Depending on the answers you get, you may be able to answer "when will it be done?" by yourself. And you'll have better insight into where the team is putting their emphasis.

3. If and when the team asks for more time to ensure that the story they are delivering is "Done-Done" (i.e. of high quality), give it to them without reservation. Don't make a face, don't roll your eyes, and don't be reluctant about it. I have had teams ask for more time for testing when in reality they needed more time to finish coding. That is not acceptable, partly because it will lead to shortchanging testing and code reviews, but mostly because it is a breakdown of trust.
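On the database-dependency question in point 2: one common answer teams give is that they inject the dependency and replace it with a mock in unit tests. Here's a minimal sketch in Python using the standard library's unittest.mock; the OrderService and repository names are hypothetical, invented for illustration.

```python
from unittest.mock import Mock

# Hypothetical service whose repository dependency is injected, so a unit
# test can substitute a fake instead of needing a real database.
class OrderService:
    def __init__(self, repo):
        self.repo = repo

    def total_owed(self, customer_id):
        orders = self.repo.find_orders(customer_id)
        return sum(o["amount"] for o in orders if not o["paid"])

# In the unit test, a Mock stands in for the database-backed repository:
repo = Mock()
repo.find_orders.return_value = [
    {"amount": 100, "paid": True},
    {"amount": 40, "paid": False},
]
service = OrderService(repo)

assert service.total_owed("cust-42") == 40
# Verify the service queried the repository as expected.
repo.find_orders.assert_called_once_with("cust-42")
```

When a team can describe this kind of seam in their code, it's a good sign the tests were actually written; when they can't, "the testing is done" deserves a closer look.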

Lack of Openness

Trust is a large part of any development project, but especially an agile project. Lack of trust hides information that should not be hidden and creates silence where conversation is critical. Trust goes hand-in-hand with courage, another key agile value. Team members need the courage to raise difficult issues, and the only way most team members will have that courage is if they trust the rest of the team, and management, to react appropriately. Speaking for myself, it can sometimes be hard to separate my gratitude at having an issue identified and surfaced from my frustration that it occurred, especially if it was preventable. But you have to focus on the gratitude and deal with solving the issue and moving forward. Save your concerns for later; they are often good items to deal with in the Sprint Retrospective.

In Conclusion

If the team thinks that the schedule is the most important priority, you're at risk of ending up with poor quality and a slipped schedule. If they understand that quality is the priority, you'll have a better chance of hitting your schedule, as well as better quality. It seems like a no-brainer, but if your team is hearing "schedule" when you say "quality", you need to ask what you are doing that contributes to that. This is your contribution to trust and openness: if you're doing something that is creating the wrong impression with the team, address it with them, acknowledge your mistake, and re-confirm your and the team's common goals.

- Posted from my iPad

Sunday, December 9, 2012

Introduction - A Sense of Balance

One of the things that has always intrigued me about software development, initially as a developer and then as a software manager, is the balance required to be successful. Between engineering and art; between chaos and order. With all of the changes in hardware, software and process, the need to be able to find that balance point has been a constant.

Which brings me to why I'm writing this blog. I'm interested in exploring how we, as software managers, find that balance. How we provide leadership, management and organization to the typically chaotic work of software development, all without stifling the creativity that is so crucial in that environment.

This is a particularly important consideration now, with the increasing adoption of agile methods. In working with teams over the last dozen-plus years I've found that an alarming number of them assume that "agile" means abandoning all semblance of process and documentation. In truth, agile methods are quite disciplined, with specific practices that need to be followed. The difference is that agile practices emphasize activities (see the Agile Manifesto, http://agilemanifesto.org/) that dance closer to the chaos side than "traditional" methods do. That makes it too easy to "fall in" if you're not careful.

The only way to avoid that is to find that sense of balance, and that is what I want to explore in this blog. By examining different aspects of software development, and discussing how we meet the challenges that seem ever present, I hope that we can advance the art and the science of software management.

- Posted from my iPad
