The most important things cannot be measured. The issues that are most important, long term, cannot be measured in advance.
—W. Edwards Deming
Working software is the primary measure of progress.
—Agile Manifesto
Metrics are agreed-upon measures used to evaluate how well the organization is progressing toward the portfolio, large solution, program, and team’s business and technical objectives.
Thanks to its work physics, timeboxes, and fast feedback, Agile is inherently more measurable than its proxy-based predecessor, the waterfall process. Moreover, with Agile, the “system always runs.” So, the best measure comes directly from the objective evaluation of the working system. Continuous delivery and DevOps practices provide even more things to measure. All other measures—even the extensive set of Lean-Agile metrics outlined below—are secondary to the overriding goal of focusing on rapid delivery of high-quality solutions.
But metrics are indeed important in the enterprise context. SAFe provides metrics for each level of the Framework. The links below navigate to the entries on this page.
- Lean Portfolio Metrics
- Portfolio Kanban Board
- Epic Burn-Up Chart
- Epic Progress Measure
- Enterprise Balanced Scorecard
- Lean Portfolio Management Self-Assessment
- Value Stream Key Performance Indicators
- Large Solution Metrics
- Solution Kanban Board
- Solution Train Predictability Measure
- Solution Train Performance Metrics
- Feature Progress Report
- Program Kanban Board
- Program Predictability Measure
- Program Performance Metrics
- PI Burn-Down Chart
- Cumulative Flow Diagram
- Agile Release Train Self-Assessment
- Continuous Delivery Pipeline Efficiency
- Deployments and Releases per Timebox
- Recovery over Time
- Innovation Accounting and Leading Indicators
- Hypotheses Tested over Time
- Team Kanban Board
- Team PI Performance Report
- SAFe Team Self-Assessment
Lean Portfolio Metrics
The set of Lean portfolio metrics provided here is an example of a comprehensive but Lean group of measures that can be used to assess internal and external progress for an entire portfolio. In the spirit of “the simplest set of measures that can work,” Figure 1 shows the leanest set that several Lean-Agile portfolios are using effectively to evaluate the overall performance of their transformations.
Portfolio Kanban Board
The primary purpose of the Portfolio Kanban board is to ensure that Epics are reviewed and analyzed before reaching a Program Increment (PI) boundary. This way, they can be prioritized appropriately and given established acceptance criteria to guide a high-fidelity implementation. Further, the business and enabler epics can then be tracked to understand their progress.
Epic Burn-Up Chart
The epic burn-up chart tracks progress toward an epic’s completion. There are three measures:
- Initial estimate line (blue) – Estimated story points from the Lean business case
- Work completed line (red) – Actual story points rolled up from the epic’s child features and stories
- Cumulative work completed line (green) – Cumulative story points completed, rolled up from the epic’s child features and stories
Figure 2 illustrates these measures.
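The three burn-up series above can be derived from per-iteration completed story points. The following is a minimal sketch; the function name and the numbers are hypothetical, not part of SAFe.

```python
# Hypothetical sketch: derive the three epic burn-up series from
# per-iteration completed story points (names and values are illustrative).

def epic_burn_up(initial_estimate, completed_per_iteration):
    """Return (initial, completed, cumulative) series, one value per iteration."""
    cumulative = []
    total = 0
    for done in completed_per_iteration:
        total += done
        cumulative.append(total)          # running total of accepted points
    initial = [initial_estimate] * len(completed_per_iteration)
    return initial, list(completed_per_iteration), cumulative

init, done, cum = epic_burn_up(500, [40, 60, 55, 70])
print(init)  # flat initial-estimate line
print(cum)   # [40, 100, 155, 225]
```

Plotting the flat initial-estimate line against the cumulative line shows at a glance whether the epic is converging on its original Lean business case estimate.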
Epic Progress Measure
This report provides an at-a-glance view of the status of all epics in a portfolio.
- Epic X – Represents the name of the epic; business epics are blue, and enabler epics are red
- Bar length – Represents the total current estimated story points for an epic’s child features/stories; the dark green shaded area represents the actual story points completed; the light shaded area depicts the total story points that are in progress
- Vertical red line – Represents the initial epic estimate, in story points, from the Lean business case
- 0000 / 0000 – The first number represents the current story point estimate (summarized from its child features/stories); the second number represents the initial story point estimate (also represented by the vertical red line)
Figure 3 illustrates these measures.
Enterprise Balanced Scorecard
The enterprise balanced scorecard provides four perspectives to measure performance for each portfolio—although the popularity of this approach has been declining over time in favor of Lean Portfolio Management (LPM), as shown in Figure 1. These measures are:
- Efficiency
- Value delivery
- Quality
- Agility
These results are then mapped into an executive dashboard, as illustrated in Figures 4 and 5.
For more on this approach, see chapter 22 of Scaling Software Agility: Best Practices for Large Enterprises.
Lean Portfolio Management Self-Assessment
The Lean Portfolio Management (LPM) function continuously assesses and improves its methods. The LPM team periodically completes a self-assessment questionnaire to measure its performance, which automatically produces a radar chart like the one shown in Figure 6, highlighting relative strengths and weaknesses.
Large Solution Metrics
Solution Kanban Board
The primary purpose of the Solution Kanban board is to ensure that Capabilities are reviewed and analyzed before reaching a PI boundary. This way, they can be prioritized appropriately, using established acceptance criteria to guide a high-fidelity implementation. Further, the features can then be tracked to understand their progress.
Solution Train Predictability Measure
The predictability measures of the individual Agile Release Trains (ARTs) are rolled up to calculate the Solution Train’s predictability measure, as illustrated in Figure 7.
Solution Train Performance Metrics
The performance metrics of the individual Agile Release Trains (ARTs) are rolled up to produce the Solution Train’s performance metrics, as shown in Figure 8.
Program Metrics
Feature Progress Report
The feature progress report tracks the status of features and enablers during PI execution. It indicates which features are on track or behind at any point in time. The chart has two bars:
- Plan – Represents the total number of stories planned.
- Actual – Represents the number of stories completed. The bar is shaded green or red, depending on whether the item is on track.
Figure 9 gives an example of a feature progress report.
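One common way to decide the red/green shading is to compare completed stories against a straight-line expectation through the PI. This is an assumed heuristic sketch, not a rule SAFe prescribes; the function name and inputs are hypothetical.

```python
# Hedged sketch: flag a feature green/red by comparing completed stories
# against a straight-line expectation through the PI. Teams may define
# "on track" differently; this heuristic is an assumption.

def feature_status(planned_stories, completed_stories,
                   iterations_elapsed, iterations_total):
    expected = planned_stories * iterations_elapsed / iterations_total
    return "green" if completed_stories >= expected else "red"

print(feature_status(10, 6, 3, 5))  # green (6 completed vs. 6.0 expected)
print(feature_status(10, 4, 3, 5))  # red (4 completed vs. 6.0 expected)
```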
Program Kanban Board
The primary purpose of the Program Kanban board is to ensure that features are reviewed and analyzed before reaching a PI boundary. This way, they can be prioritized appropriately, with established acceptance criteria to guide a high-fidelity implementation. Further, the features can be tracked to understand the performance of the Continuous Delivery Pipeline.
Program Predictability Measure
The team PI performance reports are summarized to determine the program predictability measure, as illustrated in Figure 10.
The report compares actual business value achieved to planned business value (see Figure 22).
For more on this approach, see chapter 15 of Agile Software Requirements: Lean Requirements Practices for Teams, Programs, and the Enterprise.
Program Performance Metrics
The end of each PI is a natural and significant measuring point. Figure 11 shows a set of performance metrics for a program.
PI Burn-Down Chart
The PI burn-down chart shows progress against the PI timebox. It’s used to track the work planned for a PI against the work accepted.
- The horizontal axis of the PI burn-down chart shows the Iterations within the PI
- The vertical axis shows the amount of work, in story points, remaining at the start of each iteration
Figure 12 exemplifies a train’s burn-down measure.
Although the PI burn-down shows the progress toward the PI timebox, it does not reveal which features are lagging behind. The feature progress report provides that information (refer to Figure 9).
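The burn-down series described above can be sketched as remaining story points at the start of each iteration, adjusted for points accepted and any scope change. The function and the numbers are illustrative assumptions.

```python
# Minimal sketch of a PI burn-down series: story points remaining at the
# start of each iteration, given planned work, points accepted per
# iteration, and optional scope changes. Values are illustrative.

def pi_burn_down(planned_points, accepted_per_iteration, scope_change=None):
    scope_change = scope_change or [0] * len(accepted_per_iteration)
    remaining = planned_points
    series = []
    for accepted, change in zip(accepted_per_iteration, scope_change):
        series.append(remaining)            # remaining at iteration start
        remaining += change - accepted      # scope added minus work accepted
    return series

print(pi_burn_down(200, [40, 50, 45, 35, 30]))  # [200, 160, 110, 65, 30]
```

A series that stays above the ideal straight line from planned points to zero signals that the train is behind; the feature progress report then shows which features are responsible.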
Cumulative Flow Diagram
The Cumulative Flow Diagram (CFD) is made up of a series of lines or areas that show the amount of work in different Kanban states. For example, typical states of the program Kanban include:
- Validating on staging
- Deploying to production
Figure 13 shows the number of features in each Kanban state by day. The thicker areas in the CFD represent potential bottlenecks.
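The per-day counts behind a CFD can be computed by counting cumulative arrivals into each state. This is a hypothetical sketch; the state list and event data below are assumed for illustration.

```python
# Hypothetical sketch: count features per Kanban state for one day of a
# CFD. entries_by_state maps each state to the days on which features
# entered it; the states and data are assumed for illustration.

STATES = ["Analyzing", "Implementing", "Validating on staging",
          "Deploying to production"]

def cfd_counts(entries_by_state, day):
    """Cumulative arrivals into each state on or before `day`. A feature
    that moved on still counts here, since it also entered earlier states."""
    return {state: sum(1 for d in entries_by_state.get(state, []) if d <= day)
            for state in STATES}

entries = {"Analyzing": [1, 2, 4], "Implementing": [3, 5],
           "Validating on staging": [6]}
print(cfd_counts(entries, day=5))
# {'Analyzing': 3, 'Implementing': 2, 'Validating on staging': 0, 'Deploying to production': 0}
```

Plotting these counts per day gives the stacked bands of Figure 13; a band that keeps widening is a candidate bottleneck.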
Agile Release Train Self-Assessment
As program execution is a core value of SAFe, the ART continuously strives to improve its performance. The RTE fills out the self-assessment questionnaire at PI boundaries or any time the train wants to pause and reflect on its progress. Trending this data over time is a key performance indicator. Figure 14 gives an example of the results in a radar chart.
Continuous Delivery Pipeline Efficiency
Pipeline efficiency compares the amount of touch time to the amount of wait time. Some of this information can be sourced automatically from tools, especially Continuous Integration and Continuous Deployment tools, while other data requires manual recording in a spreadsheet. The value stream mapping technique is often applied to analyze problems identified in this report.
Note: Touch time represents the time when the team is adding value. Typically, touch time is only a small proportion of the total production time; most of the time is spent waiting: moving work, sitting in queues, and so on.
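The comparison of touch time and wait time reduces to a simple ratio, often called flow efficiency. This sketch computes it from recorded values; the units and numbers are illustrative.

```python
# Sketch of pipeline (flow) efficiency: the fraction of total elapsed
# time that is value-adding touch time. Hours and values are illustrative.

def pipeline_efficiency(touch_hours, wait_hours):
    total = touch_hours + wait_hours
    return touch_hours / total * 100 if total else 0.0

print(round(pipeline_efficiency(touch_hours=16, wait_hours=144), 1))  # 10.0
```

A figure in the low single-digit to low double-digit percentages is common before value stream mapping reveals where the waiting occurs.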
Deployments and Releases per Timebox
This metric is meant to demonstrate whether the program is making progress toward deploying and releasing more frequently. It can be viewed on a PI basis, as shown in Figure 16.
Alternatively, we can zoom in to see how releases are handled mid-PI, as shown in Figure 17.
Recovery over Time
This report measures the number of rollbacks that occurred either physically or by turning off feature toggles. The date when a solution was deployed or released to production is also plotted here to determine if there is a relationship between the two.
Innovation Accounting and Leading Indicators
One of the goals of the continuous delivery pipeline is to enable the organization to run experiments quickly, allowing Customers to validate hypotheses. As a result, both Minimal Marketable Features (MMFs) and Minimum Viable Products (MVPs) must define leading indicators to measure progress toward the benefit hypothesis. Avoid relying on vanity metrics that do not measure real progress.
Figure 19 shows some metrics that were gathered from the SAFe website to demonstrate leading indicators for our development efforts.
Hypotheses Tested over Time
The primary goal of hypothesis-driven development is to create small experiments that are validated as soon as possible by customers or their proxies. Figure 20 shows the number of verified hypotheses vs. failures in a PI.
In an environment of quick testing, a high failure rate may indicate that the team is learning rapidly and progressing toward a good outcome.
Team Metrics
Iteration Metrics
Each Agile Team gathers the iteration metrics they’ve agreed to collect. This occurs in the quantitative part of the team retrospective. Figure 21 illustrates the measures for one team.
Team Kanban Board
Evolution of a team’s Kanban process is iterative. After defining the initial process states (e.g., define, analyze, review, build, integrate, test) and Work in Process (WIP) limits, and after executing for a while, the team’s bottlenecks should surface. If they don’t, the team refines the states or further reduces the WIP limits until it becomes obvious which state is ‘starving’ or too full, helping the team adjust for better flow.
Team PI Performance Report
During the PI System Demo, the Business Owners, customers, Agile teams, and other key stakeholders rate the actual business value (BV) achieved for each team’s PI Objectives as shown in Figure 22.
Reliable trains should operate in the 80–100 percent range; this allows the business and its outside stakeholders to plan effectively. Below are some notes about how the report works:
- The planned total BV does not include stretch objectives, which helps protect the reliability of the train
- The actual total BV does include stretch objectives
- The achievement percentage is the actual BV ÷ planned BV
- A team can achieve greater than 100 percent (as a result of stretch objectives achieved)
Individual team totals are rolled up into the program predictability measure (see Figure 10).
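The achievement calculation described in the notes above can be sketched directly: planned BV excludes stretch objectives, actual BV includes them, so a team can land above 100 percent. The objective data below is hypothetical.

```python
# Sketch of the team PI performance calculation: achievement % =
# actual BV / planned BV, where planned BV excludes stretch objectives
# and actual BV includes them. Objective data is hypothetical.

def achievement_pct(objectives):
    """objectives: list of (planned_bv, actual_bv, is_stretch) tuples."""
    planned = sum(p for p, _, stretch in objectives if not stretch)
    actual = sum(a for _, a, _ in objectives)
    return actual / planned * 100

objs = [(10, 9, False), (8, 8, False), (7, 10, False), (5, 4, True)]
print(round(achievement_pct(objs)))  # 124 - stretch work pushed it past 100
```

Excluding stretch objectives from the denominator is what makes over-achievement possible without penalizing teams for committing conservatively.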
SAFe Team Self-Assessment
Agile teams continuously assess and improve their process. One such tool is a simple SAFe team practices assessment. When the team completes the spreadsheet, it will automatically produce a radar chart like the one shown in Figure 23, which highlights relative strengths and weaknesses.
 Leffingwell, Dean. Agile Software Requirements: Lean Requirements Practices for Teams, Programs, and the Enterprise. Addison-Wesley, 2011.
 Leffingwell, Dean. Scaling Software Agility: Best Practices for Large Enterprises. Addison-Wesley, 2007.
Last update: 17 November 2017