This is a little Twitter poll I'm running and thought I'd share the question here, too.

If you ask me, the retrospective is the most valuable ceremony in Scrum practice. The concept of inspecting and adapting is one of the cores of Agile, and without it, we would never get past the very basic level of efficiency. And that's just not very efficient.

I'm not going to prescribe a certain set of questions or even a specific format, but there are a few environmental things and practices that you can employ to make your retrospectives more effective.

Send the Invitation to Everyone

You and your team members are not the only people affected by your work. Stakeholders, external product owners, other teams you collaborate with, and anyone else who might have valuable input should be invited to your retrospective.

This isn't something I would recommend doing for each and every retrospective (this can make for a pretty full room), but the end of a major release or project (or any other time that makes sense to your team) can be the perfect opportunity to hear an outside perspective on your team's performance. These "outsiders" may be able to provide valuable insight into different processes or practices that you could employ to further improve your team's work. It's also a good time to hear about the things you did right throughout the project so your team hears some validation from someone other than the Scrum Master.

In a normal retrospective, I recommend inviting the entire team. I'm not just talking about the engineers. Bring in your Product Owner! If you only hear from the engineers, you're only getting half the picture. If you're practicing Scrum, the Product Owner is part of the team and should be included in these inspect/adapt discussions.

Open Communication

This is such an abstract concept, but it's so vital to the spirit of the retrospective. Open communication is a two-way street. Team members need to be willing to talk and to listen. Sometimes, people are going to feel beat up in a retrospective after a particularly painful sprint. This is the time when they need to listen to the feedback given by their peers the most. I know it's hard to listen to negative feedback, but as long as it's delivered in a constructive way (remember this post where I said that Agile can't fix your people problems?), nothing but good can come from taking it in and making changes in yourself or your team.

Defensiveness is not allowed.

Active participation is encouraged.

Follow up!

What's the point of having an awesome retrospective discussion if you don't change anything because of it? What have you just accomplished?

Hear this: If you say you're going to improve by doing X and then never follow up on X, you have done nothing but waste everyone's time.

As a team, decide what you want to change, document it and make it visible! Post it on the wall in your team's work area and talk about it throughout the sprint. Make it front of mind. Then, when your next retrospective comes around, ask the team how they did! Do they feel like they accomplished what they agreed to? Do they feel they did everything in their power to improve in that area? Can we consider the action item closed or do we need to continue to work on it?


So, use whatever format works for your team. Change it up. Have fun with the retrospective. But don't lose the spirit of the retrospective. The point is to Inspect and Adapt. Neither of these things feels natural in a group setting, but they are vital to the health of an Agile team.

What other advice do you have for teams struggling with their retrospectives?

The terms "Acceptance Criteria" (or AC) and "Definition of Done" (or DoD) tend to be used interchangeably in the Agile community. Remember from my anatomy of a user story that you need to have a "how" included as part of your story. The AC and DoD are two ways of signaling to a developer that the work is "done." However, I see a distinct difference between these two concepts that I'd like to outline here.

Acceptance Criteria

Each user story is basically asking a developer to build out a specific bit of functionality or behavior. However, it's so easy to read a simple description of the desired feature and build out something that the requester never intended (by going too far or not far enough).

Acceptance criteria clearly outline how the developer (or anyone reading the story) will know when the feature has been built to the desired level of completeness.

Here's a quick and easy example: We have a user story asking us to build out a registration function for a website. The AC might be "A user can register a username and password that will allow them to log back into the website."

Simply put, the acceptance criteria tell us when the feature does what it is supposed to do... nothing more and nothing less.
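
One nice property of AC written this way is that they translate almost directly into automated acceptance tests. Here's a minimal sketch of the registration example above; the `Website` class and its methods are hypothetical stand-ins for whatever your application actually exposes.

```python
# Hypothetical sketch: the registration AC expressed as an acceptance test.
class Website:
    def __init__(self):
        self._users = {}

    def register(self, username, password):
        # Store the credentials so the user can log back in later.
        self._users[username] = password

    def log_in(self, username, password):
        # Succeeds only for a previously registered username/password pair.
        return self._users.get(username) == password


def test_registered_user_can_log_back_in():
    site = Website()
    site.register("alice", "s3cret")
    assert site.log_in("alice", "s3cret")     # the AC is satisfied
    assert not site.log_in("alice", "wrong")  # wrong password is rejected


test_registered_user_can_log_back_in()
print("acceptance criteria satisfied")
```

The test name itself restates the acceptance criterion, which makes it easy for anyone reading the story (or the test suite) to see what "done" means for this feature.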

Definition of Done

A good definition of done, on the other hand, gives a quick checklist, if you will, for the developer to know what they need to do in order to push their changes to the production environment. Maybe your team requires 100% test coverage before a change is allowed to be merged. Or maybe you want to specify that all integration tests pass before the story can be considered "done."

Remember, a good user story should be independent, and the DoD is a great way to ensure that stories can stand on their own, without dependency on another story. If the DoD is complete, there is no reason (other than release schedule or choice) that the story can't be released on its own. The requirements may vary story-to-story, or they may be static requirements that your team has agreed to for the entire project.
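
To make the checklist idea concrete, here's a toy sketch of a team's DoD made explicit and checkable. The checklist items below are examples only; use whatever your team actually agrees to.

```python
# Hypothetical example: a team's Definition of Done as an explicit checklist.
DEFINITION_OF_DONE = [
    "code reviewed by at least one teammate",
    "all unit and integration tests pass",
    "test coverage target met",
    "documentation updated",
]


def is_done(completed_items):
    """A story is 'done' only when every DoD item is checked off.

    Returns (done, missing) so the team can see what's left.
    """
    missing = [item for item in DEFINITION_OF_DONE if item not in completed_items]
    return (len(missing) == 0, missing)


done, missing = is_done({
    "code reviewed by at least one teammate",
    "all unit and integration tests pass",
})
print("done" if done else f"not done, still missing: {missing}")
```

The point isn't the code; it's that "done" is a binary answer against an agreed list, not a judgment call made story by story.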

Do I Need Both?

I would argue that a user story is not fully complete without acceptance criteria AND a definition of done. Now, that doesn't mean that you need a full DoD written on each and every user story. If your team has agreed on a standard definition of done that applies to each and every story, having it posted on the team wall or somewhere else visible could be enough. The important thing here is that the person implementing the user story knows when to stop (or keep going).

I know there are a lot of templates out there for what a good user story should look like. I'm not going to go down that path, because honestly I don't care what words you use in your user story as long as you have all of the important information included. Here's what makes a good user story:

User Story Format

The Who

In order to build out a piece of functionality with a certain degree of autonomy, I need to be able to make decisions about the little things. The best way to know what the right decision is in any given situation is to know who you're building the functionality for. A single piece of functionality can have many different uses based on the user, so knowing who the user is makes the decision-making process a lot simpler.

The What

This one is a bit more obvious. The person looking at the user story needs to know what is being asked for. Now, this shouldn't be a super-specific prescription like "red button on the top left corner" or anything, but it needs to give enough detail that the reader knows what to build.

The Why

If you ask me, this is the most important part of a user story. Building out functionality is great, but I can go in a completely wrong direction if I don't understand the motivation behind the request. Not only that, but if I truly understand why you're asking for a particular piece of functionality, I may have ideas on how we can provide the value in a better way. That little bit of why can really open up a world of possibilities and discussions.

The How

No, not how to implement the functionality requested. That's a major no-no. This "how" is really "How do I know when I'm done?" Each user story needs to include a definition of done and acceptance criteria, so that we know when the work is done. How do I know when I've gone far enough with the work and can say it's "done" to a satisfactory level?

So, those are the very basics of what information should be included in a well-written user story. You can write it in the "As a ..., I want ..., so that ..." format, or you can write it any other way you like, as long as this important information is included.

Estimation is one of those things within the Agile community that draws a lot of debate. This debate usually centers around the question of how to estimate. Should the team estimate in the relative concept of story points or in the absolute concept of hours or days? First, let's look at the difference between the two.

Story Points

As I mentioned earlier, story points are a relative sizing concept. In other words, story point estimation asks the question "How difficult or complex is this backlog item when compared to that backlog item?"

To begin your relative estimation in story points, have the team simply look through the backlog and find the very simplest backlog item... the item that they can knock out with their eyes closed, all day, every day, with no problem. That backlog item is a "1" in story point estimation. That's your baseline.

Now, the next time you pull up a backlog item for estimation, think of it in relative terms to that baseline item. Is it about twice as hard as that super simple task? If so, call it 2 story points. The concept is that simple. You can do this type of estimation using story points, t-shirt sizes or any number of other techniques. The main idea is that backlog items are estimated relative to other backlog items.
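
Here's a toy illustration of that relative sizing (not a real planning tool). Each backlog item is sized as a rough multiple of the agreed baseline item, then snapped up to the nearest value on a Fibonacci-like point scale; the backlog items and sizes are made up for the example.

```python
# Illustrative sketch of relative estimation against a baseline item.
SCALE = [1, 2, 3, 5, 8, 13]  # a common Fibonacci-like story point scale


def to_story_points(relative_size):
    """Map "about N times the baseline" onto the nearest point on the scale."""
    for points in SCALE:
        if relative_size <= points:
            return points
    return SCALE[-1]  # anything bigger gets the top of the scale


backlog = {
    "fix typo on login page": 1,       # the baseline item, by definition a "1"
    "add password reset": 4,           # feels roughly 4x the baseline
    "integrate payment provider": 11,  # much bigger, and fuzzier
}

for item, size in backlog.items():
    print(f"{item}: {to_story_points(size)} points")
```

Snapping to a coarse scale is deliberate: the bigger an item is, the less precision the estimate pretends to have.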

Effort Hours

First, I want to direct you to check out Mike Cohn's book Agile Estimating and Planning for a great breakdown of Ideal vs. Elapsed hours. In Chapter 5, he writes about estimating in hours and gives a great perspective.

Basically, when you estimate backlog items in hours, you're making some assumptions. First, you're assuming that all of your time is spent on only this backlog item with no interruptions or distractions. You're assuming that no unknowns pop up unexpectedly. You're assuming that the backlog item would take the same number of hours, no matter which member of the team picks up the backlog item to work. These are some fairly dangerous assumptions and I would recommend staying far away from estimating backlog items in hours for these reasons.

I have a really hard time ever promoting the use of hours for estimation purposes, but Mike Cohn has, thankfully, already written a great blog post about why he doesn't recommend story points for sprint planning, so instead of trying to present both sides of the argument here, I will simply direct you to his post. If I tried to write about how hours are great estimation and planning tools, it would simply be a half-hearted and forced attempt to share an ideal that I don't believe in and that's not fair to you, dear reader.

Final Thoughts

You might have noticed throughout this post how I feel about the "right way" to estimate. This idea of a "right way" is just as subjective as the estimation process itself, so I urge you to read up on both sides of the debate and make the decision that is best for your own team's dynamics. As with all of Agile, what works best for one team may not necessarily work best for all teams and should be approached with an experimental mindset. Try doing it one way and see if it works for your team. If it does, awesome! If it doesn't, tweak your process or try another way altogether. Repeat until satisfied.

Technical debt (or tech debt) is a concept that is often misunderstood by those new to Agile development (and sometimes by those who are not so new). It is a common misconception that tech debt is that "little bug" found in your code right before the end of the sprint that you don't have time to fix. That's absolutely not the case. That's a bug... a defect, plain and simple.

Let's break down exactly what technical debt is in order to better understand the concept. Ward Cunningham first coined the phrase back in 1992 when he explained the refactoring process using the metaphor of debt in a paper on the WyCash Portfolio Management System.

"Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite." - Ward Cunningham

Keep in mind that he's not saying that it is acceptable to allow a few bugs into the code as long as you can ship it promptly. He's saying that taking a "quick and dirty" approach that gets the job done with the intent of going back to clean up after yourself allows the product to get to market quickly.
A "bug" is something that doesn't work as intended... something that means your Definition of Done (you do have a Definition of Done, right?) has not been met. I'll assume that one part of your Definition of Done is that your code passes all tests. If that is the case (and it should be), then that user story is not done. If you say, "That bug isn't that big of a deal, we'll fix it in the next release," that isn't technical debt. That's a bug, my friend.

Ideally, we would always have time to build out a future-conscious design that may take longer to implement, but is the cleaner solution. However, this isn't always possible with time constraints and, let's be honest, we don't always (almost never) have a clear vision of the future state of the system that would allow us to know exactly how to architect it in a way that is future-proof.

Keeping with the debt metaphor, the longer technical debt goes without being refactored, the more "interest" you accrue on that debt. That means that, if left alone, your once-minor tech debt will eventually become a huge issue that impedes future development efforts. Tech debt needs to be addressed and "paid off" as quickly as possible.

One of the easiest ways to manage your tech debt is to devote a certain percentage of development efforts to refactoring, or "paying off," tech debt. A common practice in Agile development is to devote 20% of development time to tech debt. This might mean that you devote every 5th sprint to tech debt or that you include 1 tech debt user story for every 4 feature stories in each sprint. How you do it doesn't necessarily matter. What's important is that it gets done.
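
The arithmetic behind those two options is the same 20% guideline viewed at different granularities. A quick back-of-the-envelope sketch, with illustrative (not prescriptive) numbers:

```python
# Back-of-the-envelope arithmetic for the "20% to tech debt" guideline.
debt_share = 0.20        # fraction of development effort devoted to tech debt
stories_per_sprint = 5   # illustrative sprint size

# Option 1: spread the allocation across every sprint.
debt_stories = round(stories_per_sprint * debt_share)  # 1 debt story per sprint

# Option 2: concentrate the allocation into whole sprints.
debt_sprint_cadence = round(1 / debt_share)            # every 5th sprint

print(f"{debt_stories} tech debt story for every "
      f"{stories_per_sprint - debt_stories} feature stories, "
      f"or one full tech debt sprint every {debt_sprint_cadence} sprints")
```

Either way the books balance the same; the choice is really about whether your team prefers steady small payments or periodic lump sums.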

How does your team handle tech debt? Have you ever experienced a case where tech debt was left unchecked and turned into a project in and of itself?

This is just me sharing a frustration of mine. A professional pet peeve, if you will.

A few years ago, I went to a 3-day course described as an Agile Boot Camp. Since the company I was with at that time was in the infant stages of an Agile transformation strategy, this seemed like a great course for the project management team (all 3 of us).

The boot camp was full of great information and really got us pumped about what we were about to do. We spent an entire day talking about sprint ceremonies and how to use them properly. We discussed user stories and their relationship to epics, features, et cetera.

The class was awesome, but there was one major problem which I didn't really notice at the time... The class wasn't an Agile boot camp; it was a Scrum boot camp.

Agile is so much more than just Scrum. It's so much more than Kanban or XP or SAFe.

Agile is the why. The framework (such as Scrum or XP) is the how.

Just the other day, Mike Cottmeyer wrote a really great post about why frameworks and methodologies don't matter. If someone asks you to describe Agile and you give them an overview of Scrum, you're really short-changing them by giving such a limiting view of the Agile Manifesto, which is the set of guiding principles for Agile.

Agile is a mindset and a way of thinking. The Agile values can certainly be implemented through the use of a framework like Scrum, but Scrum is not Agile!

If you were in an elevator with a colleague and they asked you, "What is Agile?", how would you explain it to them in 1 or 2 sentences? Leave me a comment with your "elevator pitch."