Tuesday, September 18, 2012

Agile User Group Recap: Dealing With Technical Stories on the Backlog

At September's Nashville Agile User Group meeting we had a discussion about how to deal with technical user stories on the backlog. There was a great deal of conversation with many good points from multiple approaches.

First we talked about what we meant by a "technical user story". We were referring to stories about technical activities that do not deliver any new value to the end user. Examples brought up included things like "upgrading servers or other supporting applications" and "refactoring a module to be easier to maintain".

It was brought up that even though a story may not add value to the end user, it usually can be shown to have a tangible value to the company as a whole. For example, if we have a module that requires our operations folks to go onsite and manually perform some task on the servers, but we could rewrite it to be managed remotely, that value could be quantified by how much time (i.e. money) we would save the operations folks.

In that same vein, we lamented that technical people often do not communicate the value well to the Product Owner and other stakeholders, so these items are left off or always pushed out (sometimes permanently). Technical user stories need to be written so that their value is clearly outlined.

At this point I mentioned that I had seen several instances online of very well respected Agile thought leaders saying never to put technical stories on the backlog. One such instance was recently on the Scrum Development Yahoo group, where Ron Jeffries said:
"We do not, repeat DO NOT, put code improvement "stories" in the Product Backlog."
Well that seems fairly clear, right? However if you read his entire post he directly follows the line above with:
"The main reason for this is that we cannot state the value of the improvement in any terms that the Product Owner can understand, and therefore the Product Owner can never schedule the stories. P.S. We don't know the value either." 
So it seemed like we were on the right track when we were talking about making sure the technical story had a demonstrable value. He goes on to say:
"If we do put code improvement stories in the backlog, we can't count the work against our Product Burndown or other progress measurements."
So it does seem that he is okay with technical stories as long as you don't count the work. This led us down the trail of not adding the story points for technical stories to the team's velocity for a sprint. It was pointed out that Pivotal Tracker identifies these types of stories as "chores" and their points are not included in any sprint's velocity.

That sparked the debate over whether velocity is a metric of the team's productivity or a measure of the value added each sprint. As it turns out, the Pivotal Tracker folks had this very same discussion years ago.

I had just been talking to one of the other members about release planning right before the meeting, so I added my two cents: I prefer not only to estimate technical stories like these, but also to include them in the team's velocity, because I want a metric that is a good reflection of what the team can do in a sprint when making a release plan. (I blogged about the way I like to create release plans here.) In that case I very much want to use velocity as a measure of what the team can produce.

Where we have used velocity that way, we have also used a product burn up (or down) chart that differentiated the technical stories (and bugs) from the user stories, so that you could see how the team was doing at adding value for the user each sprint versus fixing existing stuff or working on these technical chores. To me that met Ron's criteria that you don't count the effort spent on technical stories toward a measure of progress.

Another side conversation came up when we added bugs into the mix. Do you point bugs? Do they go on the backlog with the user stories? Do fixed bugs count towards velocity? We did not get into that too deeply since that is an entire conversation in itself. Here is just one instance of someone with a totally different view on how to tackle bugs, and I could easily find tons more.

When all was said and done, the attendees filled out our meeting survey, which includes the question "What did you take away from today's discussion?" Here are a few of those answers:

  • "Have reports that can show value delivered as well as how much work was done."
  • "Bugs are not really bugs until they make it into production." (Side note: this reflects part of the bug conversation where many of us talked about fixing bugs you find in the current development of a sprint immediately and tracking them more using the Sprint Backlog and our test cases. We only logged a bug into the defect tracking system if it was in released code which is what we consider any code done at the end of the sprint.)
  • "Learn how you want to track bugs relative to velocity."
  • "There are multiple ways to tackle the same problem." (You said it brother!)
  • "Bugs with zero story points make it difficult to track them in release planning."
As usual it was a good, spirited discussion, and I look forward to our October meeting, which is going to be about "Engineering Best Practices in Agile." If you are in the Nashville area, sign up on our website to get notices about our meetings and upcoming events.

Thursday, September 06, 2012

Agile Release Planning 101


In this post I want to go over the mechanics of a basic Release Plan. I am not going to get into the principles and values behind Agile release planning in very much depth, but there are some links at the end of the post to some other online resources that cover that.

This is but one way to accomplish a Release Plan and I am sure there are many other techniques out there. I have used this approach with many teams of various sizes on projects of varying size and complexity. I have extended it in many instances, but the basics have remained the same and served me well. I would love comments about how you accomplish this with your own teams.

The Product Backlog

To start with you need a Product Backlog. To keep things simple, our backlog will have only four fields (sketched in code just after this list):

  • ID - A unique identifier for each item in the backlog. In my example I started with 1000 so that it is not as tempting to use the ID as a default priority field (which I have seen done before).
  • User Story - This is just a simple title for each story and not the full story. In a more in-depth backlog you would have the full story, conditions of acceptance, possibly types of categorization, etc.
  • Backlog Priority - This field notes the order in which the Product Owner wants the items to be delivered. This number should not have any implied logic embedded in it (more on that later).
  • Estimated Effort - The team's estimated effort for completing the item. We won't get into using Story Points, developer days, hours, etc. Suffice it to say this should be a high level estimation originating from the Scrum team in your metric of choice.
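If it helps to see that structure outside of Excel, here is a minimal sketch of the backlog as a data structure in Python. This is purely illustrative: the BacklogItem class, its field names, and the sample stories are my own assumptions, not part of any particular tool.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BacklogItem:
    """One row of the Product Backlog, mirroring the four columns above."""
    id: int                                 # unique identifier, starting at 1000
    user_story: str                         # short title, not the full story
    backlog_priority: Optional[int] = None  # force-ranked order, assigned later
    estimated_effort: Optional[int] = None  # team's high level estimate, assigned later

# A few made-up items to work with in the later sketches
backlog = [
    BacklogItem(1000, "Customer can reset their own password"),
    BacklogItem(1001, "Admin can export a monthly sales report"),
    BacklogItem(1002, "Customer can pay with a saved credit card"),
]
```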

Adding User Stories

There are many techniques for creating your User Stories, such as story workshops, customer focus groups, etc. Again, not something I am covering here. However the Product Owner gathers them, he will record them in the backlog. Normally when we are brainstorming on stories, we simply record them in the order in which we think of them and do not try to categorize or prioritize at that point. That can sometimes hinder the flow of ideas. So you should wind up with just a list of stories. Here is a simple example of an initial backlog using Excel (the simplest tool you can use next to sticky notes on a wall).


Initial High Level Prioritization

Once you have your list, some say to take it straight to the team for estimation, but I advise doing an initial round of high level prioritization first so that you maximize the team's time by looking at the items you believe to be the most important.

Do not make this more difficult than it needs to be. Use some simple metric with only a few values like: must, need, want. Or use 1,2,3. Or use High, Medium, Low. It really does not matter at this point and we've already discussed it too much. For my example I will use 1,2,3 since it is easy to sort in Excel.

Go down the backlog, spending no more than a minute discussing each item, and give it an initial priority. Make sure everyone involved in this activity knows this is not the final priority and that there will be continuous grooming sessions throughout the project where it can be reassessed and changed. This will help curb the desire to analyze each item to death.

Here's our backlog with the initial priorities (NOTE: I totally made these up so try not to read anything into the actual stories.)



Initial High Level Estimates

Once we have the backlog sorted by the initial priorities, take it to the Scrum team to have them provide some initial high level estimates. At this early stage in the process it is okay to use a similar approach as we did with priority, with a simple scale like Small, Medium, and Large. As with the prioritization before, these estimates are not the final word and the team will be asked to take a closer look at them in the future, so try not to spend too much time on them. Good luck with that, since engineers like to be precise and it is galactically hard getting them not to be. You can use any of the commonly suggested estimation techniques like planning poker or whatever works for your team.

If your team feels confident that they can use their normal metric for estimation (like the Fibonacci sequence) then let them. These techniques I am suggesting are really best at the beginning when everything is fairly high level and you really want to just get some idea of what you want to do first and how big that effort will be. As I said before, you will continuously refine the priority and estimates over the project. If your backlog is fairly large, you may want to primarily concentrate on the top most items since they should be the most valuable. 

Below is our backlog now with our initial estimates.


Refine Backlog And Priorities

Now that the Product Owner has some idea of how big the backlog items are, they may want to move some items around or split up some stories. Once they have the backlog arranged in the order in which they would like them delivered, it is time to refine your metric for prioritization.

The backlog priority field should have no implied logic to it. I have seen numbers like 1102 used to mean the stakeholders wanted the item delivered by February 2011. This is not a good technique, as it puts pressure on the team to deliver by that date and implies to the stakeholders that that is when it will be delivered. But that number is usually not based on the team's velocity or on any input from the team, so it is a recipe for disaster.

The best way to ensure that the field cannot be used in such a manner is to always start at 1 and force rank the entire backlog. Each time you rearrange the backlog, you force rank it again starting with 1. After you have executed an iteration, you force rank the remaining open items on the backlog, again starting from 1. This makes it pretty hard to embed any type of logic into the number.
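Continuing the illustrative Python sketch from earlier (the force_rank helper below is my own naming, not anything prescribed), the re-ranking rule is trivial to express:

```python
def force_rank(open_items):
    """Renumber the open backlog items 1..N in their current order.

    Run this every time the backlog is rearranged or an iteration
    completes, so the priority number never carries hidden meaning.
    """
    for rank, item in enumerate(open_items, start=1):
        item.backlog_priority = rank
    return open_items
```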

Here is our Product Backlog now with a force ranked priority. 


Refining Estimated Effort

Just as we revisited our initial prioritization, bring the backlog back to the Scrum team to have them refine their estimates using your normal metric (unless you already did that from the start). Spend more time on the topmost items since they are most likely to be played in the next few iterations. Then try to go as far down the backlog as you can (depending on the size of the backlog and your team's availability). In general your Scrum team should spend 10% to 15% of their time during a Sprint helping groom the backlog in this manner, so the items you don't get to in these initial rounds of estimating will eventually be addressed.

Now we have a Product Backlog that is force ranked and estimated to the best of the team's ability.


You will notice that some of the items on the backlog have rather large estimates. This can be due to many factors such as uncertainty, risk, pure size and complexity, etc. The Scrum team should determine a maximum size that they will allow into an iteration. Drawing this line gives the team a tool to push back on items they do not feel they could complete in a single iteration. The Product Owner should go back and break these stories up or provide answers to the team's questions to allow them to refine their estimates.

Large items at the bottom of the backlog are expected since we do not expect them to be delivered any time soon so we can revisit them later when they rise on the backlog and get closer to being played in an iteration. If you have large stories towards the top of the backlog, they need to be broken up until they are an acceptable size or moved down the backlog.

The backlog here has the larger stories moved to the bottom and the priority field force ranked starting at 1 again. For my example the Scrum team has determined that any story larger than 13 is too large for a single iteration.


Determining Initial Velocity

Velocity is the sum of effort the team completes in an iteration. At the beginning of a project you may not have any idea what your team's velocity will be. I've seen all sorts of suggestions online with algorithms and big math to figure it out based on iteration length, available days, team size, etc. Whichever technique you use, just know that you will most likely be wrong. And that is okay. After the first iteration you will have an actual velocity to start using and keep tracking it from there.

I suggest meeting with the Scrum team, talking through the items at the top of the backlog, and simply asking the team how far down they think they can commit to completing in the first iteration. Since this still stands a good chance of being wrong, don't spend too much time on it or hold the team too accountable to it.

For our example we will use an initial planning velocity of 15. We are also going to use an iteration length of 3 weeks and we are starting our project on 1/1/2013.

Creating the Initial Release Plan

With a prioritized and estimated Product Backlog and an initial velocity (which I like to call your planning velocity when used for release planning), we can make our first pass at a release plan. We simply start at the top of the backlog and work our way down, summing the effort until we reach 15 (or right before we go over it). So if we have an item with an effort of 13 and the next item has an effort of 3, you should not put those in an iteration together since it would put you over your planning velocity of 15. When you reach the items at the bottom of the backlog that are too large to be played in an iteration, go ahead and stop for now. When doing this in Excel I use alternating colors to easily show the grouping of items into iterations, as seen below.
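For those who prefer code to color coding, here is a rough sketch of that same walk down the backlog, building on the illustrative BacklogItem structure from earlier. The function name and parameters are my own assumptions; the only rules it encodes are the ones above: fill each iteration up to the planning velocity and stop at the oversized items at the bottom.

```python
def plan_iterations(backlog, planning_velocity=15, max_story_size=13):
    """Group a prioritized, estimated backlog into iterations.

    Walk the backlog top to bottom, closing out an iteration whenever
    the next item would push its total effort over the planning
    velocity, and stop at the large items at the bottom that are too
    big to be played in a single iteration.
    """
    iterations = []
    current, current_effort = [], 0
    for item in backlog:
        if item.estimated_effort > max_story_size:
            break  # too large for one iteration; revisit after it is split
        if current_effort + item.estimated_effort > planning_velocity:
            iterations.append(current)  # e.g. a 13 followed by a 3 closes the iteration
            current, current_effort = [], 0
        current.append(item)
        current_effort += item.estimated_effort
    if current:
        iterations.append(current)
    return iterations
```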




Since we now have our items grouped into iterations based on our planning velocity, you can use a proposed project start date and the length of your iteration to add dates to the groupings and get some idea of timing. Remember that this is very high level and is going to be constantly revisited each iteration, so consider this a living document.
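Continuing the sketch above (again, the helper name and the tuples it returns are just assumptions for illustration), bolting dates onto those groupings only takes the start date and iteration length we chose: 1/1/2013 and 3 weeks.

```python
from datetime import date, timedelta

def add_iteration_dates(iterations, start=date(2013, 1, 1), weeks_per_iteration=3):
    """Attach a start and end date to each iteration grouping."""
    length = timedelta(weeks=weeks_per_iteration)
    schedule = []
    for number, items in enumerate(iterations, start=1):
        iteration_start = start + (number - 1) * length
        iteration_end = iteration_start + length - timedelta(days=1)
        schedule.append((number, iteration_start, iteration_end, items))
    return schedule
```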



At this stage your Product Owner can pow wow with the appropriate stakeholders to talk about releases. Some companies release whenever they think they have enough value for their customers, so they will mainly ignore the dates at first and just see how far down the backlog they get when they reach the point where the functionality could be released to the customer. Then they can have some idea of when that may be, but of course each iteration this will be revised.

If your company does time-based releases (like a regular mid-year release), they can find the last iteration end date before the release date. Then you can see what functionality would be completed and how much value you would be delivering to your customer. There may be cases where a group of functionality that needs to be delivered together as a unit to give the customer the best value spans your target release date, and the Product Owner may have to make some tough choices.

Either way there will probably be some grooming of the backlog once this information has been hashed over. The Product Owner may move items around, split them, combine them, etc. Anytime the Product Owner grooms the backlog, they should get with the team to have them revisit their estimates. This should happen even if they simply move some items around, since some estimates may have assumed item B would be easier to implement if item A was already in place; if you swap them, the estimates need to be refined.

At this time you can choose to create a separate spreadsheet with just the items you want to slate for this upcoming release and use this as the working release plan. Then the Product Owner can move items on and off this separate list. This helps if you have a large backlog and only a subset is on the release plan. Keeping them separate can make it easier to manage them.

Revising the Release Plan Each Iteration

During each iteration, the Product Owner will continuously groom the product backlog by working with stakeholders to add, remove, refine, and prioritize stories as well as getting with the Scrum team for estimation. 

At the end of an iteration you can take the Scrum team's latest velocity and use it along with your historical velocities to adjust your planning velocity. One technique is to average the lowest 3 for a worst case planning velocity and average the highest 3 for a best case. Giving stakeholders both release plans as a range helps keep them from clinging to individual dates.
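Here is that worst case / best case calculation as a small sketch, with made-up historical velocities just to show the arithmetic:

```python
def planning_velocity_range(historical_velocities, sample_size=3):
    """Return (worst_case, best_case) planning velocities.

    Worst case is the average of the lowest N historical velocities,
    best case is the average of the highest N.
    """
    ordered = sorted(historical_velocities)
    worst = sum(ordered[:sample_size]) / sample_size
    best = sum(ordered[-sample_size:]) / sample_size
    return worst, best

# Six made-up iteration velocities: 12, 18, 15, 20, 14, 17.
# Worst case is (12 + 14 + 15) / 3, roughly 13.7; best case is
# (17 + 18 + 20) / 3, roughly 18.3.
print(planning_velocity_range([12, 18, 15, 20, 14, 17]))
```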

The new release plan should be shared with the team and stakeholders each iteration. As you progress towards the target release, the backlog and release plan become more refined and more accurate. The key thing is to realize that the release plan is like an Etch-A-Sketch that you shake up and recreate at the end of each iteration based on the latest empirical evidence.

Extending the Release Plan

This simple approach works well but in most cases you will need to extend it to meet your specific needs. Be careful to not add too much and make the release plan into something it is not. That is very easy to do. Here are some common extensions I have had to make for various reasons.

Adding Bugs - I have been in many a debate about how to treat bugs when it comes to the backlog and the release plan. Many people do not like to waste time estimating the effort on bugs since there is often a good deal of discovery involved before you can make a decent estimate. In those cases I have seen teams put bugs in their own backlog and just dedicate a certain percentage of the velocity each iteration to addressing the most critical ones. If this is your approach as well, then you will want to make sure that you adjust your planning velocity to exclude the effort set aside for bug fixes in your release planning.
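That adjustment is simple arithmetic; as a made-up example, a planning velocity of 20 with 20% of each iteration reserved for bug fixes leaves 16 points per iteration for release planning:

```python
def release_planning_velocity(planning_velocity, bug_fix_share):
    """Velocity left for backlog items after reserving a share for bug fixes."""
    return planning_velocity * (1 - bug_fix_share)

print(release_planning_velocity(20, 0.20))  # 16.0 points per iteration
```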

Unfortunately there are many companies that come into Agile with a significant backlog of existing bugs that need to be addressed. In those cases we treated the bugs just like stories and put them on the backlog. They were estimated and prioritized just like the other backlog items. This incurs more overhead, but it is sometimes warranted if you have to get a handle on the impact of your outstanding bugs. Since the bugs are now in the backlog, you can use your normal planning velocity during release planning. If you still have a significant number of new bugs coming in, then you may also want to adjust your planning velocity based on your historical average of new bugs introduced each iteration. Of course if you are in this situation you obviously have a quality issue and need to take other steps to shore up your quality processes.

When we added bugs to the backlog, we also added a field called Type that could be either User Story or Bug so that we could differentiate them when viewing the backlog or release plan. You can also colorize that field based on its value (green for user stories and red for bugs) so that with a quick glance at the release plan you can get some idea of how much of that release consists of new features.

Categorization and Grouping - Often a group of stories makes up a feature and needs to be delivered together for that feature to add enough customer value to release it. Stakeholders like reports on those features that show when all of the needed stories are forecasted to be released, or how many items in the feature group are complete versus open. This can usually be facilitated by just adding another column called Category. The problem arises when reporting needs start to require multiple or hierarchical groupings. You can try a couple of simple techniques for these cases, like adding more category columns (Category1, Category2, etc. or MainCategory, SubCategory, etc.). You can also stick with one field and use a delimiter, but that is not as easy to filter in Excel for simple reporting.

Baselines - It never fails that someone asks to see how the current release plan compares to what it looked like in the past. This can be very dangerous in that it reflects an inability to embrace change, which is a major value of Agile. If you are asked for something like this, I would recommend diving into why they need to know it and what they plan to do with the information. Backlogs are supposed to change, and since a release plan is simply an overlay on top of the backlog, it is meant to change as well.

Business Value and ROI - Business value is sometimes mistakenly seen as the only metric for sorting your backlog. It depends on what you mean. Normally business value is something tangible, like adding feature X will bring in 10% more customers next quarter and result in $1,000,000 in additional sales. Can't argue with that. But what if the construction and support effort to get feature X out the door is $1,500,000?

There are also cases where you may implement a feature for less tangible reasons. You may have a very important beta customer who is very helpful in testing candidate releases. They may have a feature they really want, but it is not as applicable to your wider market. You may decide to implement this feature to maintain that relationship.

All this to say that there are many variables that go into deciding the "value" of a backlog item. I still think capturing the estimated value of an item is worthwhile; I just use it as another input to setting the overall priority of the backlog. Try to keep it a simple metric with some tangible guidance. One example would be a rating of 0 to 1000 (1000 being the highest value). Of course people will naturally try to say everything is a 1000, but that is where the guidance comes in. Maybe the range of 750 to 1000 is for items that, if they are not delivered in your next release, will cost your company a significant amount of market share through the loss of existing or potential customers.

Using a scale like that also allows you to do some simple Return on Investment (ROI) calculations. Divide the business value by the estimated effort to get some idea of the value gained versus the amount of effort it would take to deliver it. As I said, this is a very simplistic approach to ROI, but it is good at a high level. You can then start to see that an item valued at 1000 but with an effort of 20 has less ROI than an item valued at 500 with an effort of 5.
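In sketch form, using the same made-up numbers from above:

```python
def simple_roi(business_value, estimated_effort):
    """Very simplistic ROI: business value per unit of estimated effort."""
    return business_value / estimated_effort

print(simple_roi(1000, 20))  # 50.0
print(simple_roi(500, 5))    # 100.0 - the smaller item wins on ROI
```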

Other Resources

There are other ways to approach release planning, but the key is to make sure whatever mechanisms you decide to use reflect the Agile values your company is striving to adopt. Here are some online resources that have some additional insight on the subject.