Chapter 5 – Using Data Effectively in Your Retrospectives


I was at a conference a number of years ago when I saw someone walking around in a t-shirt that caught my eye. It had one very simple quote on it that has stuck with me ever since:

Of course, my mind immediately went to retrospectives! So many people don’t bring data to their retrospectives. What happens in a retrospective without data?

A lot of the time, people end up presenting their perception of the facts as the absolute truth, because it feels real to them! This is not deceptive behavior … everyone experiences the world differently and their version of the facts feels real to them. But without a common set of data, it is hard to find out what is objectively true vs. what is subjective reality.

That’s why stopping to Gather Data is important. It gives The Team an opportunity to establish a shared understanding of what happened, which enables more meaningful conversation about how to improve going forward.

So what data should you bring to your next retrospective? And how might you use it effectively?

A Retrospective Without Data

Let’s start with a story. Imagine your team has just sat down for its latest retrospective…

“Alright everyone,” Sophia, your Scrum Master, says. “Let’s take the next few minutes to silently write down the top 2-3 problems you think the team faced during the last two-week sprint. I’ll set a timer and let you know when time’s up.”

Everyone on the team thinks about the latest iteration. Bob starts writing things down immediately. So does Sally. A few others sit and stare into space, wondering “what problems did we face, anyway? I’m too bogged down in my work to remember what happened a few days ago, let alone last week.”

After the timer dings, Sophia says, “Ok, folks. What have we got?”

Bob, the team’s biggest extrovert, is the first to speak, as always. “The reason we didn’t finish all of the items in our sprint backlog is that we have so many bugs to deal with. It’s hard to deliver on our sprint goal when bugs keep us bogged down.”

Sally responds, “Maybe, Bob. But to me the main issue isn’t bugs, though those are annoying. It’s that we always underestimate how long it will take to deliver on our sprint backlog items, and end up over committing.”

Samuel is the next to speak. “I actually disagree with both Bob and Sally. Yes, we have bugs. And yes, we seem to underestimate how long it will take to accomplish our goals. But the biggest problem is the interruptions. Our boss always seems to add unrelated tasks mid-sprint and that’s distracting us from getting our work done.”

And, scene.

If you were Sophia, the team’s Scrum Master, what would you do next? How would you know what the team should discuss first? Think for a moment before scrolling down.

Hi again! 😊 So, what will your next step be?

When I’ve presented this scenario to people in the past, the most common response I get is: “I’d use dot voting to prioritize the discussion.” Which is great! Dot voting taps the collective intelligence of the team to figure out what’s most important.

But what if the collective intelligence of the team is wrong? Is there a better way?

Using Data In Your Retrospective

As you might have guessed from the title of this chapter, one thing Sophia could have done is gather and present some relevant data before asking the team to analyze its impediments. This aligns closely with The 5 Phases of an Effective Retrospective, in which Gathering Data comes one step before Generating Insights.

Gathering data creates a shared picture of what happened. Without a common picture, individuals tend to verify their own opinions and beliefs. Gathering data expands everyone’s perspective.

Diana Larsen and Esther Derby, Agile Retrospectives: Making Good Teams Great

In this particular case, imagine if The Team had three poster boards hung around the room before the retrospective even started:

  1. One with data about bugs
  2. One with data about estimates
  3. One with data about focus time, or lack thereof

(Yes, I know these are vague. I’ll get into specifics in a minute!)

After kicking off the retro by Setting The Stage, Sophia could have next asked The Team to turn its attention to the data hanging around the room. Importantly, this would have happened before The Team was asked to share what they felt was most important to discuss.

Perhaps after looking at the bug data, Bob would have realized that bugs aren’t actually a big issue for the whole team, but instead something important mainly to Bob because he’s the one who always volunteers to fix them.

Or perhaps Sally would have realized that The Team’s estimates are more on-target than she imagined. It’s just that there’s always a mad rush at the end of the sprint to get everything done, and she’s always the one to pick up the load.

But no matter what The Team finds, by analyzing the data together, they will have built a shared understanding of the facts. And this will enable them to have a more productive conversation during the rest of the retrospective.


So What Data, Specifically, Should We Bring?

In her course Powerful Retrospectives, Esther Derby shares that there are two categories of data that you can utilize in your retrospective. The first is Objective, or Hard, Data. The second is Subjective, or Soft, Data.

Let’s focus first on Objective Data.

Objective (“Hard”) Data

Objective Data, sometimes referred to as “hard data,” is any information that can be measured and verified.

There is a nearly limitless amount of Objective Data you can bring to your retrospective, but let’s dive into a few types that I’ve found to be particularly helpful.

1. Burn-Down Chart

If you’re using Scrum, you almost certainly have a Sprint Burn-Down Chart readily available. If it’s on a physical sheet of poster board, hang it around your conference room before the retrospective starts. If it’s in a tool like Jira, draw it by hand or print it out.

How will your Burn-Down Chart help? Let’s dive into a few examples to find out.

Scenario 1: Burn-Down Chart showing we were ahead of schedule and then fell behind

Remember, in a Burn-Down Chart, the y-axis represents the amount of work left and the x-axis represents time.
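The mechanics behind the chart are simple enough to sketch in a few lines of Python. (The story-point numbers below are hypothetical, purely for illustration.) Each day, the remaining work is re-totaled and compared against an ideal, even burn-down:

```python
# Hypothetical sprint data: a burn-down chart plots the story points
# remaining at the end of each day (y-axis) against the day (x-axis).

sprint_days = 10
total_points = 40

# Points completed per day (made-up numbers for illustration)
completed_per_day = [6, 7, 2, 0, 0, 1, 3, 6, 8, 7]

remaining = []
left = total_points
for done in completed_per_day:
    left -= done
    remaining.append(left)

# The "ideal" line burns down evenly from total_points to 0
ideal = [total_points - total_points * (day + 1) / sprint_days
         for day in range(sprint_days)]

for day, (actual, target) in enumerate(zip(remaining, ideal), start=1):
    print(f"Day {day:2d}: {actual:2d} points left (ideal: {target:4.1f})")
```

Days where `actual` sits well above `target` are exactly the flat-line stretches the questions below help you investigate.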

In this scenario, The Team started out fast and then fell behind, before catching up. To understand why, here are some questions you might ask:

  • Why were we ahead of schedule at the beginning of the sprint?
  • What did we do well that we could focus on repeating next sprint?
  • Why did our productivity flat line in the middle of the sprint?
  • What happened that we can avoid repeating next sprint?

Scenario 2: Estimated Time Remaining Increased

In this scenario, the estimated amount of work remaining in the sprint went up in the middle of the sprint. Why did that happen? Here are some questions you might ask:

  • Did new items get inserted into the backlog mid-sprint? If so, how did that happen and what can we do to avoid it in the future?
  • Did our initial estimates from Sprint Planning underestimate the complexity of our tasks? If so, what can we do to understand them better in the future?

Scenario 3: A slow start

In this scenario, The Team started the sprint slowly. Not much work was getting completed. And then there was a mad rush to finish all the work before the end of the sprint. To understand why, you could ask:

  • What can we do early in the sprint to make more progress?
  • Did something happen at the beginning of the sprint that distracted us from getting our work done?

2. Average Cycle Time Plot

Another piece of Objective Data that is particularly helpful is Cycle Time. If you’re in the Lean or Kanban world, you likely already know why Cycle Time is such a powerful piece of data to have. If you’re following Scrum, you might be wondering, what is Cycle Time? It’s actually quite simple.

Cycle Time is the total time it takes for a task to be completed

For example, if you start work on a user story on Monday at 9am and finish it on Wednesday at 9am, the story’s Cycle Time was 2 days. (Complex math, I know.)
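That calculation can be sketched in a couple of lines of Python (the timestamps are hypothetical, chosen to match the Monday-to-Wednesday example):

```python
from datetime import datetime

def cycle_time_days(started: datetime, finished: datetime) -> float:
    """Cycle Time: total elapsed time from start of work to completion, in days."""
    return (finished - started).total_seconds() / 86400

# The user story from the example: started Monday 9am, finished Wednesday 9am
story_start = datetime(2024, 1, 1, 9, 0)  # a Monday
story_end = datetime(2024, 1, 3, 9, 0)    # the following Wednesday
print(cycle_time_days(story_start, story_end))  # 2.0
```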

You can calculate Cycle Time for anything you work on — user stories, bugs, tickets, tasks, etc. Once you have the data, you can plot it like this:

Here’s how to read this chart. On the y-axis you see time. This represents the number of days it took for a work item to be completed. On the x-axis you see day of the week. This represents the day that the work item finished.

So for example, the dot above Monday represents the fact that The Team completed a work item on Monday and it took them about 3 days to complete. Hence, that work item’s Cycle Time was 3 days.

Now that you understand how to read the Cycle Time plot, take a look at it again. Before scrolling down to get my take, what jumps out at you?

When I look at this chart, two things jump out at me pretty quickly.

  1. The Team has a consistent delivery cadence. On average, this team completes its work items in about 5 days. You can see that in the graph because the vast majority of dots in the plot hover around 5 days.
  2. There is a major outlier on Wednesday. One of The Team’s work items took almost 20 days to complete!

If I were in this team’s retrospective, I’d want to dig in. Why was there an outlier? What work item was this? What extenuating circumstances were there that caused this work item to take so long? Is there anything we can do to prevent this from happening again?
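One simple way to flag such outliers before the retro even starts is to mark anything far above the median. This is just a sketch with made-up cycle times, and the "2x the median" threshold is an arbitrary rule of thumb, not a standard:

```python
import statistics

# Hypothetical cycle times (in days) for items completed during the sprint
cycle_times = [4, 5, 5, 6, 5, 19, 4, 5, 6, 5]

median = statistics.median(cycle_times)

# Flag anything more than twice the median as worth discussing in the retro
outliers = [ct for ct in cycle_times if ct > 2 * median]
print(f"Median cycle time: {median} days; outliers: {outliers}")
```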

Another Cycle Time Example

Let’s examine another Average Cycle Time plot.

What jumps out at you with this plot? I immediately see two things:

  1. The average cycle time is increasing as the week goes on. On Monday, the one work item that was completed took about 2 days. On Tuesday the average increased to 4 or 5 days. And by Friday, the cycle time was up to about 12 days. What happened? Why is it taking longer and longer to complete our work as the sprint goes on?
  2. There is an outlier on Thursday. Despite the increasing trend to our average cycle time, on Thursday something good happened: our work item was completed in roughly the same amount of time as the work item we completed on Monday! Was this just by chance? Or did we do something different on that work item that we should try to duplicate next time?

Both of these topics would be great discussion points in this team’s retrospective.

3. Time In Status

Let’s look at another type of Objective Data that I’ve found to be particularly helpful.

Imagine you have four steps to your development process. First, a task is selected for development (perhaps in Sprint Planning). Then, some amount of time passes before development actually begins. Once the developers believe the task is done, it moves to QA. After QA is complete, the developers put in a Pull Request for final technical review.

Visually, here is what the process looks like:

1. Selected For Development =>
2. In Progress =>
3. In QA =>
4. Pull Request Review

Imagine now that you’ve noticed your development process has slowed down. In other words, the Cycle Time is increasing. Wouldn’t it be useful to know why? To know where the bottlenecks in your process are?

That’s where the Average Time In Status plot can come in handy. Here’s what it looks like:

Here’s how to interpret this chart.

On the x-axis is time (in this case over the course of an entire year) and on the y-axis is the number of days.

Each line represents one of the steps in this team’s process. Blue for “selected for development”, green for “in progress”, and so on. And the value of each line shows the average number of days a work item spent in that step of the process.

So for example, in January, the average work item spent just under 10 days in development and just under 5 days in QA.
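Under the hood, a Time In Status plot just averages, per status, how long each work item sat in that status. Here's a minimal sketch with hypothetical items and durations (loosely matching the January numbers above):

```python
from collections import defaultdict

# Hypothetical records: (work item, status, days spent in that status)
records = [
    ("STORY-1", "In Progress", 8),  ("STORY-1", "In QA", 4),
    ("STORY-2", "In Progress", 12), ("STORY-2", "In QA", 5),
    ("STORY-3", "In Progress", 9),  ("STORY-3", "In QA", 3),
]

# status -> [total days, item count]
totals = defaultdict(lambda: [0, 0])
for _, status, days in records:
    totals[status][0] += days
    totals[status][1] += 1

for status, (days, count) in totals.items():
    print(f"{status}: average {days / count:.1f} days")
```

Compute this per month and you have one point per line on the chart.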

Now that you understand how to read the graph, take a look again at the graph. What jumps out at you?

I immediately find two things of interest:

  1. There was a big spike in June in the amount of time work was in progress. For some reason, the work in progress in June took just under 25 days to complete, whereas in other months work took roughly around 10 days to complete. Why did this happen? Did we change our process? Hire anyone new? Was this a one-off outlier that we can safely ignore or something important that we should discuss so it doesn’t happen again?

  2. The time it takes for pull requests to be completed is steadily increasing. Take a look at the red line in the graph. The amount of time for PRs to be completed keeps going up. What’s going on? Are there too many pull requests waiting for review? Is the code getting too complex to easily understand? Is the person responsible for reviewing PRs busy with other tasks?

4. Business Data

In the previous three examples of Objective Data, we’ve looked at technical and engineering information specific to your team. But if the purpose of building software is to deliver value to your customers, then it makes sense to also inspect business data in your retrospectives!

Is all the work we are doing as a team having an impact? Are our customers happier as a result of the work we are doing? Did we cause revenue to go up with a new product feature or enhancement?

What business data should you look at? A good place to start is with the metrics most closely associated with your company’s top 3 measurable business goals for the year. If you don’t know them, ask your manager!

Example 1: Net Promoter Score (NPS)

For example, you might learn that the business is focused this year on increasing your Net Promoter Score (NPS). Whether you know the name NPS or not, you’ve almost certainly seen the single question that all NPS surveys ask: “How likely are you to recommend this product to a friend?”

By looking at NPS data across time, you can tell whether the product features you are adding are resulting in increased happiness among your customers. If not, why? Perhaps you are prioritizing the wrong user stories, or perhaps your Product Owner isn’t talking frequently enough with your customers to know what they actually value. Without looking at NPS, you’d never know.
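NPS itself is easy to compute from raw survey answers (ratings from 0 to 10): the percentage of promoters (9-10) minus the percentage of detractors (0-6). A sketch with made-up responses:

```python
def net_promoter_score(ratings):
    """NPS = % promoters (9-10) minus % detractors (0-6), as a whole number."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# Hypothetical survey responses
responses = [10, 9, 9, 8, 7, 7, 6, 5, 9, 10]
print(net_promoter_score(responses))  # 30
```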

Example 2: Churn Rate

Here’s another example. Suppose your company sells a subscription product (like a cellphone plan or a food subscription box). Your leadership is focused this year on keeping customers for longer and they use a metric called “churn rate” to see how many customers cancel every month.
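Churn rate is typically defined as the share of customers at the start of a period who cancel during it. A quick sketch with hypothetical numbers (note the month-over-month increase, which is the kind of trend worth bringing to a retro):

```python
def monthly_churn_rate(customers_at_start: int, cancellations: int) -> float:
    """Churn rate: fraction of customers at the start of the month who cancel."""
    return cancellations / customers_at_start

# Hypothetical months: (name, customers at start of month, cancellations)
months = [("Jan", 2000, 60), ("Feb", 2050, 72), ("Mar", 2080, 95)]
for name, start, lost in months:
    print(f"{name}: {monthly_churn_rate(start, lost):.1%} churn")
```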

When they give you churn rate data, you see that over the past year, churn rate has actually been increasing! Then you look at the features you have delivered this year, and realize that most of them have been focused on making it easier for new customers to get value from the product, rather than on keeping existing customers happy. And the one time you added a feature focused on decreasing churn, it didn’t have any impact at all!

By connecting the dots between the business and the engineering team, you’ve discovered something really valuable. It’s unlikely you would have discovered this otherwise.


Other Types of Objective Data

You can see how using Objective Data can help your team focus on what’s most important to discuss. The difficulty with Objective Data is that there are many different types of data you can collect and analyze. Here are some additional pieces of Objective Data you can consider using in your retrospectives:

Velocity across sprints
As you likely know, your team’s velocity is the number of completed story points over an iteration. If you track this across time, you will be able to analyze your team’s trend. Your goal should not be to increase velocity every sprint. Instead, if your velocity is increasing, ask why. If your velocity is decreasing, ask why.

Increasing velocity should not be your team’s goal! Instead, the velocity trend is something to analyze.
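A quick sketch of surfacing that trend (in Python, with hypothetical story-point totals): compare the average of the most recent sprints to the sprints before them, then ask why.

```python
# Hypothetical completed story points for the last six sprints
velocities = [21, 24, 23, 19, 17, 14]

# Compare the average of the three most recent sprints to the three before them
earlier = sum(velocities[:3]) / 3
recent = sum(velocities[-3:]) / 3

direction = "down" if recent < earlier else "up or flat"
print(f"Velocity trend: {earlier:.1f} -> {recent:.1f} ({direction}). "
      f"Either way, ask why -- the number itself is not the goal.")
```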

Amount of time spent in meetings
Meetings aren’t inherently good or bad. Some meetings add value and others don’t. But it is certainly true that the more time spent in meetings, the less time you have for other work. If you see the amount of time you are spending in meetings is increasing over time, ask why. Maybe this extra time in meetings is needed, and maybe it’s not. If you see the amount decreasing over time, ask why. This might not be a good thing! (Or maybe it is.)

Number of new support requests over time
Do support requests frequently interrupt the team? Track the number of new support requests over time. If it’s going up, maybe it’s a sign of bugs in the system. Or maybe it’s a process problem — some of these support requests could have been handled without ever reaching the development team.

Percent of time spent on bugs, features, ad-hoc requests, etc
Building software requires long periods of heads-down time. Some people call this focus time. Others refer to it as being “in the flow”. If your days are full of interruptions, your team’s productivity will likely decrease. The problem is that asking your team to track its time is onerous! Joe Wright, a software development coach, suggests using Legos to track the team’s time. Here’s a pic from Twitter of what that might look like in practice:

There are many more types of Objective Data you might consider analyzing. Think about what is relevant to your team. Bring whatever that is to your next retrospective, and see what happens.

Subjective (“Soft”) Data

Subjective Data is sometimes referred to as “soft data”. Analytical people sometimes scoff at it (“just give me the facts, who cares about how we feel”), but people are emotional beings and sometimes it’s impossible to fully understand what happened using Objective Data alone.

Subjective Data includes personal opinions, feelings, and emotions on the team. Whereas Objective Data presents the facts, Subjective Data can reveal what your team thinks is important about the facts.

Like with Objective Data, there is a nearly limitless amount of Subjective Data you can bring to your retrospectives. Here are a few specific examples of Subjective Data I’ve found to be most helpful.

1. Highs and Lows

Throughout your iteration, your team members will experience tons of different emotions. Sometimes they will be happy. Sometimes they will be motivated. Other times, some people on your team might feel annoyed or frustrated. And so on.

It’s important to recognize that the emotional state of your team will likely have a big impact on its productivity.

Here’s an example of how that might play out. Suppose your team had a particularly bad sprint and was unable to deliver on the Sprint Backlog.

As you Gather Data, you ask your team to take a look at the Sprint Burn-Down Chart (which, remember, is Objective Data):

What happened? Why was the sprint a failure? If The Team was looking solely at Objective Data, it might then look at the accuracy of its estimates or analyze the Git commit log. Which is great! Do that!

But what if you asked your team to map out how it felt during the same period of time?

Here’s how that works. Simply ask everyone to put a dot underneath the Burn-Down Chart representing how happy or sad they felt at various points during the sprint.

You’ll notice that at the beginning of the sprint, The Team was happy! Things felt great, even though The Team was behind according to the Burn-Down Chart.

And then … something happened. All of a sudden, the entire Team felt bad. Why?

Maybe it was something internal: perhaps The Team realized it had underestimated the complexity of a user story and got frustrated because it realized it would never be able to deliver on time.

Maybe it was something external: The Team learned that their request to collectively attend a conference was denied yet again by senior management.

But whatever happened is worth discussing, and without mapping The Team’s emotions, it’s likely you’d never have that conversation.

In fact, according to Diana Larsen and Esther Derby in their book Agile Retrospectives: Making Good Teams Great:

Creating a structured way for people to talk about feelings makes it more comfortable to raise topics that have an emotional charge. When people avoid emotional content, it doesn’t go away; it goes underground and saps energy and motivation.

Keep in mind that you can map more than just the team’s happiness across time. You could measure engagement, empowerment, satisfaction, autonomy, or any other emotion you want to consider.


2. Mad Sad Glad

Mad Sad Glad in Retrium

This popular retrospective technique helps highlight your team’s emotions. To run Mad Sad Glad, simply set up three poster boards around the room titled Mad, Sad, and Glad. Ask everyone to privately write on sticky notes what they felt Mad about, what they felt Sad about, and what they felt Glad about. Once everyone is done brainstorming, have everyone place their sticky notes on the boards.

Then, ask your team questions like:

  • What patterns do you see? Are most of the notes in one column or another?
  • What surprises you about the results?
  • Did multiple people add sticky notes about the same event? What does that mean?
  • Are there any events that led to a disparity in emotions? (Someone felt Glad about the same event someone else felt Mad about, for example.)

3. Liked, Learned, Lacked, Longed For (“4Ls”)

4Ls in Retrium

4Ls, originally created by Mary Gorman and Ellen Gottesdiener, is similar to Mad Sad Glad in that it asks your team to think through how it felt and write down responses on sticky notes. 4Ls asks your team:

  • What did you like about the iteration?
  • What did you learn during the iteration?
  • What did you lack during the iteration?
  • What did you long for during the iteration?

After The Team is finished brainstorming, you can optionally split into breakout groups of 2-4 people to discuss the results, before reporting back to the entire team.

4. Team Radar

Sometimes you’ll run into situations in which certain members of your team push back on the use of Subjective Data. “Let’s focus on the hard facts instead of all this mushy feelings stuff,” they might say.

If that’s the situation you find yourself in (and even if not), you can use Team Radar to create Subjective Data that is more quantifiable.

Team Radar is a technique that uses individual numerical ratings to provide a sense of how the overall team is doing on various aspects of its work. For example, you might run a Team Radar based on the 5 Scrum Values of commitment, courage, focus, openness, and respect.

You’d ask everyone to think about how well they think The Team is doing on each of these five values, and then write down a rating from 1 (“poor”) to 5 (“excellent”) for each one.

You can then map out the responses in a radar diagram:

Scrum Values Radar in Retrium
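To turn individual ratings into a radar chart, you just aggregate per value; the spread between the lowest and highest rating is what reveals disagreement. A sketch with hypothetical ratings from four team members (the spread threshold of 3 is an arbitrary choice for illustration):

```python
# Hypothetical ratings (1-5) from four team members per Scrum Value
ratings = {
    "commitment": [1, 1, 1, 1],
    "courage":    [5, 4, 5, 4],
    "focus":      [5, 4, 5, 1],  # one person strongly disagrees
    "openness":   [4, 4, 3, 4],
    "respect":    [4, 5, 4, 4],
}

for value, scores in ratings.items():
    avg = sum(scores) / len(scores)
    spread = max(scores) - min(scores)
    flag = "  <- disagreement worth discussing" if spread >= 3 else ""
    print(f"{value:10s} avg {avg:.1f} (spread {spread}){flag}")
```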

Once you’ve collected this data, ask everyone what they notice. Does anything surprise you? Two things jump out at me:

  1. Commitment is a problem. Out of all the Scrum Values, commitment received the lowest rating across the board. In fact, every single person rated it a 1!
  2. There is disagreement on The Team around focus. While some people rated focus highly, one person rated focus poorly. Why? What about each person’s experience on The Team caused this level of disagreement?

From this information, you can dive deeper into a particular topic. For example, you could spend the rest of the retrospective exploring how the team could increase its commitment going forward. Or you could talk about why The Team’s courage is rated so highly and what it takes to maintain that going forward.

But no matter what you focus on, The Team now has a shared understanding that it didn’t have before.


5. Business Data

A lot of business data you’ll have access to won’t be objective, but will still be incredibly useful to bring to your retrospective.

Example: Annual Employee Survey Results

Suppose, for example, that you work at Company Alpha, which recently released the results of its Annual Employee Survey. The survey asked a number of questions around employee engagement, including:

  1. I am proud to work for Company Alpha
  2. In three years, I will still be working at Company Alpha
  3. I would recommend Company Alpha as a great place to work

When you take a look at the survey results, you find some great news: across the company, employees seem to love working there! 🥰

But the survey also breaks out responses by division, and it turns out that the division you work for has the lowest level of employee engagement of any division in the company.

This would be fantastic data to bring to your next retrospective. Ask the team: why? What are we doing that is causing engagement to be lower than elsewhere in the company? Is there anything under our control that we can change? If not, who should we talk to?

Next Steps

As with Objective Data, the sky is the limit in terms of what Subjective Data you can collect and use in your retrospectives. Use your imagination! And if you need some help, a great place to start is by looking at the various activities for Gathering Data over at Retromat, a website that maintains a list of various retrospective techniques.

Things To Be Aware Of

Now you’ve seen the power of using data in your retrospective. It all sounds great, right? Data helps your team focus on what’s most important to discuss. And it gives your team a shared understanding of what actually happened. What could go wrong?

It turns out, a whole lot. In her course Powerful Retrospectives, Esther Derby identifies a number of “anti-patterns” to watch out for:

  1. Do not bring data that focuses on individual team members (to the detriment of others)
    For example, you might collect data around the introduction of bugs into the repository. Who committed which bugs and who fixed them? While this data might be useful in other contexts, the retrospective should not be used to compare and contrast individual performance. What to do instead: bring data around the total number of bugs introduced to the repo and the total number of fixes issued across the entire team.

  2. Do not falsely present Subjective Data as Objective Data
    For example, you might suspect that some team members are working on side projects during the sprint (at the request of management) without alerting the rest of the team. You believe this might be reducing the team’s throughput. You haven’t collected Objective Data to know for a fact that this is happening; it’s merely a suspicion. Don’t tell the team that this is a fact! What to do instead: bring this issue up while your team gathers Subjective Data, so that your teammates know it’s an unproven opinion at this point.

  3. Do not collect more data than you need
    Collecting and tracking data takes time and effort, so it’s important to track only the data you need and nothing more. I bet you can think of a time when data was tracked without a clear purpose (“that’s just how we’ve done it in the past”). If there’s no reason to collect data, then don’t.

  4. Do not share data outside The Team without permission
    Metrics, in and of themselves, are neither good nor bad. But metrics can be abused and misused. A classic example of this in the agile world is weaponized velocity: some teams are judged on whether their velocity is increasing over time, instead of being left to use velocity internally as a tool for their own improvement. Be aware that once you start collecting all this fabulous data, managers, stakeholders, and others might want to see it. This could be good (if it’s used in a way that adds value for the team) or bad (if the team is judged against it).

  5. Do not spend too much time collecting the data
    I’ve seen teams that fall in love with analyzing hard facts in their retrospectives, so much so that they end up spending an inordinate amount of time collecting the data each sprint. Don’t fall into this trap! Data should be easy to collect. If it’s not, make it easy, or drop it.

Where’s the next chapter?

Unfortunately, it’s not ready yet 😒 The next chapter, titled “Psychological Safety & Trust”, will be available on February 28. Hang tight! (If you want to be notified when it’s ready, sign up to receive our newsletter below.)