First we talked about what we meant by a "technical user story": a story covering technical activities that do not deliver any new value directly to the end user. Examples brought up included upgrading servers or other supporting applications, refactoring a module to make it easier to maintain, and so on.
It was pointed out that even though a story may not add value to the end user, it usually has a tangible value to the company as a whole. For example, if a module requires our operations folks to go onsite and manually perform some task on the servers, and we could rewrite it to be managed remotely, the value could be quantified by how much time (i.e., money) we would save the operations folks.
In that same vein, we lamented that technical people often do not communicate this value well to the Product Owner and other stakeholders, so these items are left off the backlog or pushed out again and again (sometimes permanently). Technical user stories need to be written so that their value is clearly outlined.
At this point I mentioned that I had seen several instances online of very well-respected Agile thought leaders saying never to put technical stories on the backlog. One such instance was recently on the Scrum Development Yahoo group, where Ron Jeffries said:
"We do not, repeat DO NOT, put code improvement "stories" in the Product Backlog."Well that seems fairly clear, right? However if you read his entire post he directly follows the line above with:
"The main reason for this is that we cannot state the value of the improvement in any terms that the Product Owner can understand, and therefore the Product Owner can never schedule the stories. P.S. We don't know the value either."So it seemed like we were on the right track when we were talking about making sure the technical story had a demonstrable value. He goes on to say:
"If we do put code improvement stories in the backlog, we can't count the work against our Product Burndown or other progress measurements."So it does seem that he is okay with technical stories as long as you don't count the work. This lead us down the trail of not adding the story points for technical stories to the team's velocity for a sprint. It was pointed out that Pivotal Tracker identifies these types of stories as "chores" and their points are not included in any sprint's velocity.
That sparked the debate over whether velocity is a metric of the team's productivity or a measure of value added each sprint. As it turns out, the Pivotal Tracker folks had this very same discussion years ago.
I had just been talking to one of the other members about release planning right before the meeting, so I added my two cents: I prefer not only to estimate technical stories like these, but also to include them in the team's velocity, because that makes velocity a good reflection of what the team can accomplish in a sprint when building a release plan. (I blogged about the way I like to create release plans here.) In that case I very much want to use velocity as a measure of what the team can produce.
Where we have used velocity that way, we have also used a product burn-up (or burn-down) chart that differentiated the technical stories (and bugs) from the user stories, so you could see how the team was doing at adding value for the user each sprint versus fixing existing stuff or working on technical chores. To me that met Ron's criterion that you don't count the effort spent on technical stories toward a measure of progress.
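Building that kind of differentiated burn-up comes down to tallying cumulative completed points per sprint, split by category. Here is a minimal sketch with made-up per-sprint numbers (the category names and data are assumptions for illustration, not our actual project data):

```python
# Completed points per sprint, split by category (made-up numbers).
sprints = [
    {"feature": 8, "chore": 2, "bug": 1},
    {"feature": 6, "chore": 4, "bug": 2},
    {"feature": 9, "chore": 1, "bug": 0},
]

# Build a cumulative series per category; plotting these as separate
# lines shows value delivered versus technical/maintenance work.
burn_up = {"feature": [], "chore": [], "bug": []}
totals = {"feature": 0, "chore": 0, "bug": 0}
for s in sprints:
    for cat in totals:
        totals[cat] += s[cat]
        burn_up[cat].append(totals[cat])

print(burn_up["feature"])  # [8, 14, 23]
print(burn_up["chore"])    # [2, 6, 7]
print(burn_up["bug"])      # [1, 3, 3]
```

Charting the "feature" series against the others makes it obvious at a glance when a sprint went mostly to chores or bug fixing rather than new user value.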
Another side conversation came up when we added bugs into the mix. Do you point bugs? Do they go on the backlog with the user stories? Do fixed bugs count toward velocity? We did not get into that too deeply, since it is an entire conversation in itself. Here is just one instance of someone with a totally different view on how to tackle bugs, and I could easily find tons more.
When all was said and done, all the attendees filled out our meeting survey, which includes the question "What did you take away from today's discussion?" Here are a few of those answers:
- "Have reports that can show value delivered as well as how much work was done."
- "Bugs are not really bugs until they make it into production." (Side note: this reflects part of the bug conversation where many of us talked about fixing bugs you find in the current development of a sprint immediately and tracking them more using the Sprint Backlog and our test cases. We only logged a bug into the defect tracking system if it was in released code which is what we consider any code done at the end of the sprint.)
- "Learn how you want to track bugs relative to velocity."
- "There are multiple ways to tackle the same problem." (You said it brother!)
- "Bugs with zero story points make it difficult to track them in release planning."
As usual it was a good, spirited discussion, and I look forward to our October meeting, which is going to be about "Engineering Best Practices in Agile." If you are in the Nashville area, sign up on our website to get notices about our meetings and upcoming events.