Built-In Quality

Build incrementally with fast, integrated learning cycles

—SAFe Principle #4

Abstract

Built-in Quality is one of the four Core Values of SAFe. The Enterprise’s ability to deliver new functionality with the fastest sustainable lead time, and to react to rapidly changing business environments, depends on Solution quality. But built-in quality is not unique to SAFe. Rather, it is a core principle of the Lean-Agile Mindset, where it helps avoid the costs of delay associated with recalls, rework, and defect fixing. The Agile Manifesto is focused on quality as well: “Continuous attention to technical excellence and good design enhances agility” [1]. There can be no ambiguity about the value of built-in quality in large-scale systems. It is mandatory. To this end, this article describes quality practices for software, firmware, and hardware.

The software practices—many of which are inspired by and described in eXtreme Programming (XP)—help Agile software teams ensure that the solutions they build are high quality and can readily adapt to change. The collaborative nature of these practices, along with a focus on frequent validation, creates an emergent culture in which engineering and craftsmanship are key business enablers.

With respect to firmware and hardware, the goal is the same, but the physics and economics—and therefore the practices—are somewhat different. They include a more intense focus on modeling and simulation, as well as more exploratory early Iterations, more design verification, and more frequent system-level integration.


Details

When Enterprises need to respond to change, software and systems built on good technical foundations are easier to change and adapt. For large systems, this is even more critical, as the cumulative effect of even minor defects and wrong assumptions can create unacceptable consequences. Achieving high-quality systems is serious work that requires ongoing training and commitment, but the investment is warranted by many business benefits:

  • Higher customer satisfaction. Low defect-density products and services provide higher levels of Customer satisfaction, higher safety, better return on investment, and higher confidence for the business.
  • Predictability and integrity of the development organization. Without reliable control over Solution quality, it is impossible to plan new business functionality predictably, or to understand development velocity. This creates a lack of trust in the business, reduces individual pride of craftsmanship, and lowers morale.
  • Scalability. When the components clearly express design intent, exhibit simplicity, and have supporting integration and test infrastructure, adding more teams and functionality will result in more business value delivered. Otherwise, adding more developers and testers slows the enterprise down and creates unacceptably high variability in quality and Release commitments.
  • Velocity, agility, and system performance. Higher quality fosters better maintainability and enhances the ability to add new business functionality more quickly. It also enhances the team’s ability to maintain and enhance critical Nonfunctional Requirements, including performance, scalability, security, and reliability.
  • Ability to innovate. Innovation occurs at the confluence of smart, motivated people and the environment in which they operate. High quality creates a fertile technical environment where new ideas can be easily prototyped, tested, and demoed.

The sections below summarize recommended practices for achieving Built-in Quality of software, firmware, and hardware.


Software

SAFe is built on a decades-long evolution of better engineering practices, driven in large part by the success of innovative Lean and Agile methods. When it comes to software engineering, XP practices have led the way [2]. In addition to the key role that Agile Architecture plays in assuring system-level quality, SAFe describes additional critical Lean-Agile software engineering practices that help teams achieve the highest levels of quality. The sections below provide a summary of these practices, three of which—Continuous Integration, Test-First, and Refactoring—link to more in-depth articles on the subject.

Continuous Integration

Continuous Integration (CI) is the software practice of merging the code from each developer’s workspace into a single main branch of code, multiple times per day. In that way, all developers on a program are constantly working with the latest version of the code, and errors and mismatched assumptions among developers and teams are discovered immediately. This helps avoid the risk of deferred system-level integration issues and their impact on system quality and program predictability. Continuous integration requires specialized infrastructure and tools, including build servers and automated test frameworks. But even more important is the cultural mandate that individuals and teams give themselves credit only for fully integrated and tested software.
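As a minimal sketch of the idea (not any particular CI product), the script below runs the full automated test suite and refuses the merge if anything fails; the script name, the `tests` directory layout, and the use of Python's built-in unittest runner are illustrative assumptions:

```python
import subprocess
import sys

# Illustrative pre-merge gate: run the automated test suite and block the
# merge on failure. A real build server would run an equivalent step on
# every merge to the main branch.
def pre_merge_check():
    result = subprocess.run(
        [sys.executable, "-m", "unittest", "discover", "-s", "tests"],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        print(result.stderr)
        print("Tests failed -- do not merge to the main branch.")
    return result.returncode == 0

if __name__ == "__main__":
    sys.exit(0 if pre_merge_check() else 1)
```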

In SAFe, teams perform local integration at least daily, but equally important is the responsibility to integrate the full system and run regression tests as necessary, to ensure that the system is evolving in the intended manner. Initially that may not be feasible every day, so a reasonable initial goal may be to achieve full system-level integration at least one or two times per Iteration. This provides the baseline needed for a successful biweekly System Demo.

The Continuous Integration companion article offers more about this topic, particularly with respect to system- and Solution-level CI.


Test-First

Test-First is a philosophy that encourages teams to reason deeply about system behavior prior to implementing the code. This increases the productivity of coding and improves the code's fitness for use. It also creates a more comprehensive test strategy, whereby the understanding of system requirements is converted into a series of tests, typically prior to developing the code itself.

Test-first methods can be further divided into Test-Driven Development (TDD) and Acceptance Test-Driven Development (ATDD). Both are supported by test automation, which is required in support of continuous integration, team velocity, and development effectiveness.

In Test-Driven Development, developers write an automated unit test first, run the test to observe it fail, and then write the minimum code necessary to pass the test. Primarily applicable at the code method or unit (technical) level, this ensures that a test actually exists, and it tends to prevent the gold plating and feature creep that come from writing code more complex than necessary to meet the stated test case. It also helps requirements persist as automated tests that remain up to date.
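A minimal sketch of one TDD cycle in Python follows; the shipping-cost rule and all names are invented for illustration. The tests are written first and fail (red); then the simplest implementation that passes is added (green):

```python
import unittest

# Step 1: write the tests first and watch them fail (red).
class TestShippingCost(unittest.TestCase):
    def test_free_shipping_over_threshold(self):
        self.assertEqual(shipping_cost(order_total=120.00), 0.00)

    def test_flat_rate_under_threshold(self):
        self.assertEqual(shipping_cost(order_total=40.00), 5.99)

# Step 2: write the minimum code needed to make the tests pass (green).
def shipping_cost(order_total):
    return 0.00 if order_total >= 100.00 else 5.99

if __name__ == "__main__":
    unittest.main()
```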

Acceptance Test-Driven Development is an advanced discipline in which the criteria for acceptable system behavior, as specified by the Product Owner, are converted to an automated acceptance test that can be run repeatedly to ensure continued conformance as the system evolves. ATDD helps keep the team focused on necessary system-level behavior that is observable to Customers, and it gives the teams clear success criteria.
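In the same illustrative spirit, the sketch below expresses a hypothetical Product Owner acceptance criterion (a loyalty-points rule, invented for this example) as an automated test against behavior a Customer can observe:

```python
import unittest

# Hypothetical system under test: a loyalty-points calculator whose
# output is observable to the Customer.
def award_points(purchase_amount, is_member):
    """One point per whole currency unit spent; members earn double."""
    points = int(purchase_amount)
    return points * 2 if is_member else points

# Acceptance criterion (illustrative), as the Product Owner might state it:
#   "Members earn 2 points per dollar; non-members earn 1 point per dollar."
class AcceptanceLoyaltyPoints(unittest.TestCase):
    def test_member_earns_double_points(self):
        self.assertEqual(award_points(50.00, is_member=True), 100)

    def test_non_member_earns_standard_points(self):
        self.assertEqual(award_points(50.00, is_member=False), 50)

if __name__ == "__main__":
    unittest.main()
```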

The companion Test-First article offers more on this topic.


Refactoring

Refactoring “is a disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior” [3]. As Agile eschews “Big Up-Front Design” (BUFD) and “welcomes changing requirements, even late in development” [1], the current code base must remain amenable to change as new business functionality is required. To maintain this resiliency, teams continuously refactor code in a series of small steps, providing a solid foundation for future development. Neglecting refactoring in favor of “always new functionality” quickly creates a system that is rigid and unmaintainable, and ultimately drives increasingly poor economic outcomes.
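To make the idea concrete, the before-and-after sketch below (entirely invented code, not an example from the article) restructures a routine in small steps without changing its external behavior:

```python
# Before: an unclear name, a magic number, and buried intent.
def calc(items):
    t = 0
    for i in items:
        t += i["price"] * i["qty"]
    if t > 1000:
        t = t - t * 0.05
    return t

# After: identical external behavior, restructured to reveal intent.
BULK_DISCOUNT_THRESHOLD = 1000
BULK_DISCOUNT_RATE = 0.05

def line_total(item):
    return item["price"] * item["qty"]

def order_total(items):
    subtotal = sum(line_total(item) for item in items)
    if subtotal > BULK_DISCOUNT_THRESHOLD:
        subtotal -= subtotal * BULK_DISCOUNT_RATE
    return subtotal
```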

Refactoring is a key enabler of emergent design and is a necessary and integral part of Agile. More guidance on refactoring is presented in the companion Refactoring article.

Pair Work

Pair programming—a mandatory practice in XP—involves two programmers working together at a single workstation. One person codes while the other reviews, inspects, and adds value to the coding process; the roles are switched throughout a pairing session. Before a line of code is written, the pair discusses what needs to be done and gains a shared understanding of the requirements, the design, and how to test the functionality. The observer inspects the work, suggests improvements to the code, and points out potential errors. In this way, peer review happens in real time, as an integral part of development.

Pair programming is integral to XP, but it is also controversial—some people believe that pairing increases costs because two people are working on the same lines of code. And there can be interpersonal and cultural challenges as well.

For many, pair programming is too extreme. In practice, however, Agile Teams apply a number of different styles of pair work. Each has value on its own, and many can be used in combination. Some teams follow pair programming standards for all code development, as prescribed by XP. Others pair developers and testers on a Story; each reviews the other’s work as the story moves to completion. Still others prefer more spontaneous pairing, with developers pairing for critical code segments, refactoring of legacy code, development of interface definitions, and system-level integration challenges. Pair work is also a great foundation for refactoring, CI, test automation, and collective ownership.

Collective Ownership

Collective Ownership “encourages everyone to contribute new ideas to all segments of the project. Any developer can change any line of code to add functionality, fix bugs, improve designs, or refactor. No one person becomes a bottleneck for changes” [4]. Collective ownership is particularly critical in SAFe, as big systems have big code bases, and it’s less likely that the original developer is still on the team or program. Even when that person is on the team, waiting for one individual to make a change is a handoff, and a source of delay.

Implementing collective ownership at scale requires following proven, agreed-to coding standards across the program; embracing simplicity in design; revealing intent in code constructs; and building interfaces that are resilient to changing system behaviors.
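As one small, purely illustrative sketch of revealing intent in code constructs (all names here are hypothetical), intention-revealing types and well-named functions let any developer on the program read and change the code safely, without tribal knowledge:

```python
from dataclasses import dataclass

# Illustrative only: an intention-revealing construct that any team
# member can modify without consulting the original author.
@dataclass
class TemperatureReading:
    celsius: float
    sensor_id: str

def exceeds_safe_limit(reading: TemperatureReading,
                       limit_celsius: float = 85.0) -> bool:
    """Named for what it decides, not how. Callers depend only on this
    stable interface, so the implementation can be refactored freely."""
    return reading.celsius > limit_celsius
```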

Firmware and Hardware

Coding aside, low-quality components and systems cannot be scaled either. Hardware elements—electronics, electrical, fluidics, optics, mechanical, packaging, thermal, and many more—are a lot less “soft”; they are more difficult and expensive to change. As illustrated in Figure 1 [5], errors and unproven assumptions in firmware and hardware can introduce a much higher cost of change and rework over time.

Figure 1. Relative cost of change over time for software, firmware, and hardware

As described in the sections below, this higher cost of later change drives Lean developers of cyber-physical systems to a number of practices that assure built-in quality during solution development.

Exploratory Early Iterations

As highlighted in reference [6], even when building a Harley-Davidson—where it takes a fairly complete team to road test a new motorcycle—more frequent design cycles are the systems builders’ tool to accelerate knowledge and reduce the risk and cost of errors discovered later. In this respect, SAFe treats firmware and hardware in the same way it does software; the cadence of Iterations and Program Increments, and the expectations, are largely the same. However, the early iterations are particularly critical, because the cost of change has not yet reached its higher, later levels. Lean systems builders therefore apply “softer” tools, such as models and simulations, to iterate more quickly during these early stages.

Frequent System-Level Integration and Testing

Continuous integration is a worthy and achievable goal for many software solutions. However, it is difficult to apply or even envision in cyber-physical systems, as many physical components—molds, mechanisms, fabricated parts, and more—evolve more slowly and cannot be integrated and evaluated every day. But that cannot be an excuse for late and problematic integration.

After all, eventually all the different components and subsystems—software, firmware, hardware, and everything else—must collaborate to provide effective solution-level behaviors. And the sooner, the better. To this end, Lean systems builders strive for early and frequent integration and testing of subsystems and systems. This typically requires an increased investment in development, staging, and test infrastructure.

Design Verification

However, integration and testing alone are not enough. First, due to dependencies on the availability of various system components, such testing can occur too late in the process; second, it cannot possibly evaluate every usage and failure scenario. To address this, systems builders perform design verification to ensure that a design meets the Solution Intent. Design verification can include steps such as specification and analysis of requirements between subsystems; worst-case analysis of tolerances and performance; failure mode and effects analysis; traceability; analysis of thermal, environmental, and other deployed system-level aspects; and more.
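One of those steps, worst-case analysis of tolerances, can be illustrated with a small stack-up sketch; the parts, dimensions, and envelope limit below are invented for the example:

```python
# Illustrative worst-case tolerance stack-up for a three-part assembly.
# All dimensions and tolerances are made up for this sketch.
parts = [
    {"name": "housing", "nominal_mm": 40.0, "tol_mm": 0.10},
    {"name": "spacer",  "nominal_mm": 5.0,  "tol_mm": 0.05},
    {"name": "shaft",   "nominal_mm": 25.0, "tol_mm": 0.08},
]

nominal = sum(p["nominal_mm"] for p in parts)
worst_case = sum(p["tol_mm"] for p in parts)  # tolerances add in the worst case

print(f"Assembly length: {nominal:.2f} mm +/- {worst_case:.2f} mm (worst case)")

# A design-verification check: does the worst case still fit the envelope?
ENVELOPE_MAX_MM = 70.5
assert nominal + worst_case <= ENVELOPE_MAX_MM, "Worst-case stack exceeds envelope"
```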

While many of these practices are important in the software domain as well, the cost of change is lower there, so less needs to be known in advance of committing to a design option. With respect to firmware and hardware, however, more up-front design and specification work is warranted.

Learn More

[1] Manifesto for Agile Software Development. www.AgileManifesto.org.

[2] Beck, Kent, and Cynthia Andres. Extreme Programming Explained: Embrace Change. Addison-Wesley, 2004.

[3] Fowler, Martin. Refactoring.com.

[4] Collective Ownership. Extreme Programming. http://www.extremeprogramming.org/rules/collective.html.

[5] Rubin, Ken. Agile in a Hardware-Firmware Environment: Draw the Cost of Change Curve. http://www.innolution.com/blog/agile-in-a-hardware-firmware-environment-draw-the-cost-of-change-curve.

[6] Oosterwal, Dantar P. The Lean Machine: How Harley-Davidson Drove Top-Line Growth and Profitability with Revolutionary Lean Product Development. Amacom, 2010.


Last update: 9 December 2015