“Inspection does not improve the quality, nor guarantee quality. Inspection is too late. The quality, good or bad, is already in the product. Quality cannot be inspected into a product or service; it must be built into it.”
—W. Edwards Deming
Built-In Quality practices ensure that each Solution element, at every increment, meets appropriate quality standards throughout development.
The enterprise’s ability to deliver new functionality with the shortest sustainable lead time, and to adapt to rapidly changing business environments, depends on solution quality. So it should be no surprise that built-in quality is one of the SAFe Core Values. But built-in quality is not unique to SAFe; rather, it is a core principle of the Lean-Agile Mindset, where it helps avoid the costs and delays associated with recalls, rework, and defect fixing. The built-in quality philosophy applies systems thinking to optimize the system as a whole, ensures fast flow across the entire Value Stream, and makes quality everyone’s job.
The Agile Manifesto also focuses on quality: “Continuous attention to technical excellence and good design enhances agility.” Many of the practices in this article are inspired by, and described in, Extreme Programming (XP). These practices help Agile teams ensure that the Solutions they build are high in quality, can readily adapt to change, and are architected for testing, deployment, and recovery. The collaborative nature of these practices, along with a focus on frequent validation, creates an emergent culture in which engineering and craftsmanship are key business enablers.
Software, hardware, and firmware all share the same goals and principles of built-in quality. However, the work physics and economics are somewhat different for hardware and firmware, requiring a different set of practices. They include a more intense focus on modeling and simulation, as well as more exploratory early iterations, more design verification, and frequent system-level integration.
When Enterprises need to respond to change, software and systems built on good technical foundations are easier to change and adapt. For a large solution, this is even more critical, as the cumulative effect of even minor defects and wrong assumptions can create unacceptable consequences.
Achieving high-quality systems is serious work that requires ongoing training and commitment, but the investment is warranted by its many business benefits:
- Higher customer satisfaction
- Improved velocity and delivery predictability
- Better system performance
- Improved ability to innovate, scale, and meet compliance requirements
The following sections summarize the recommended practices for achieving built-in quality, first for software and then for hardware and firmware.
Software Practices
Software and systems built with high quality are easier to modify and adapt when an enterprise must rapidly respond to change. Many of the practices inspired by XP, along with a focus on frequent validation, create an emergent culture in which engineering and craftsmanship are key business enablers. These practices include:
- Continuous Integration (CI) – The practice of merging the code from each developer’s workspace into a single main branch multiple times per day. CI lessens the risk of deferred integration issues and their impact on system quality and program predictability. Teams perform local integration at least daily, but to confirm that the work is progressing as intended, full system-level integration should be performed at least once or twice per iteration.
- Test-First – A set of practices that encourages teams to think deeply about intended system behavior before implementing the code. Test-first methods fall into two categories: 1) Test-Driven Development (TDD), in which developers write an automated unit test first, run the test to observe the failure, and then write the minimum code necessary to pass the test; and 2) Acceptance Test-Driven Development (ATDD), in which Story and Feature acceptance criteria are expressed as automated acceptance tests that can be run continuously to ensure continued validation as the system evolves.
- Refactoring – A “disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior.” Refactoring is a key enabler of emergent design and agility. To maintain system robustness, teams continuously refactor code in a series of small steps, providing a solid foundation for future development.
- Pair work – Agile teams can pair using a variety of techniques, many of which can be used in combination. Some teams use pair programming for all code development, as prescribed by XP. Others pair a developer and a tester on a story; each reviews the other’s work as the story moves to completion. Still others pair as needed: developers pair for critical code segments, refactoring of legacy code, development of interface definitions, and system-level integration challenges. Pair work is a solid foundation for refactoring, CI, test automation, and collective ownership, and it reduces or eliminates the need for post-implementation code reviews and rework.
- Collective ownership – This practice invites shared understanding of, and responsibility for, the solution by all team members. “Any developer can change any line of code to add functionality, fix bugs, improve designs, or refactor.” Collective ownership becomes critical for solutions with large code bases: if only the original developer can make changes, the effect is the same as relying on specialists, and delays will rule the day. Without collective ownership there is no collective knowledge, making the solution riskier to maintain and enhance.
- Agile Architecture – By balancing emergent design with intentional architecture, Agile architecture practices enable incremental value delivery. This avoids Big Design Upfront (BDUF) and the ‘start-stop-start’ nature of a phase-gated approach. Creating Architectural Runway is one of the primary tools for implementing Agile architecture. This runway consists of the code, components, and technical infrastructure necessary to support the implementation of prioritized, near-term features, without excessive redesign and delay.
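The TDD loop described above can be sketched concretely. This is an illustrative example only; the `total_price` function and its tests are invented for this sketch, not taken from XP or SAFe materials:

```python
import unittest

# Red: the automated unit test is written first and run to observe the
# failure (at that point, total_price does not exist yet).
class TestTotalPrice(unittest.TestCase):
    def test_basic_total(self):
        self.assertEqual(total_price(3, 10.0), 30.0)

    def test_discount_applied(self):
        self.assertAlmostEqual(total_price(2, 50.0, discount=0.1), 90.0)

    def test_rejects_negative_quantity(self):
        with self.assertRaises(ValueError):
            total_price(-1, 10.0)

# Green: write the minimum code necessary to make the tests pass.
def total_price(quantity, unit_price, discount=0.0):
    if quantity < 0 or unit_price < 0:
        raise ValueError("quantity and unit_price must be non-negative")
    return quantity * unit_price * (1.0 - discount)

if __name__ == "__main__":
    unittest.main(argv=["tdd-example"], exit=False, verbosity=0)
```

Because the tests exist before the implementation, they act as an executable specification and can run continuously as the system evolves.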
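The refactoring practice can likewise be illustrated with one small, behavior-preserving step; the invoice functions below are invented for this sketch:

```python
# Before: the discount rule lives inline inside the loop, a common
# refactoring target once the rule needs to change or be reused.
def invoice_total_before(items):
    total = 0.0
    for qty, price in items:
        amount = qty * price
        if amount > 100.0:
            amount *= 0.95  # bulk discount
        total += amount
    return total

# After: the same external behavior, restructured in a small step. The
# discount rule is extracted so it has one home and one name.
def _line_amount(qty, price, multiplier=0.95, threshold=100.0):
    amount = qty * price
    if amount > threshold:
        amount *= multiplier
    return amount

def invoice_total(items):
    return sum(_line_amount(qty, price) for qty, price in items)

# External behavior is unchanged, which the automated tests confirm:
items = [(3, 50.0), (1, 20.0)]
assert invoice_total(items) == invoice_total_before(items)
```

The final assertion is the point: each refactoring step is guarded by tests, so the internal structure improves while observable behavior stays fixed.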
Hardware and Firmware Practices
When building hardware and firmware, errors and unproven assumptions carry a much higher cost of change and rework over time than they do in software, as illustrated in Figure 1.
The cadence of iterations and Program Increments (PIs), and the quality expectations, are largely the same for hardware, firmware, and software. However, the early iterations are particularly critical for hardware and firmware, because the cost of change is still low at that point. Therefore, developers of complex systems with hardware and firmware use additional practices to ensure quality is built in during solution development:
- Model-Based Systems Engineering (MBSE) – By applying modeling and tools to the requirements, design, analysis, and verification activities in solution development, model-based systems engineering offers a cost-effective way to learn about system characteristics prior to and during development. It helps manage the complexity and cost of documentation for large systems.
- Set-Based Design (SBD) – The practice of set-based design maintains multiple requirements and design options for a longer period in the development cycle. Empirical data is used to narrow the focus based on the emergent knowledge gained.
- Frequent system integration – For many software solutions, continuous integration is an achievable goal. However, systems with physical components—molds, printed circuit boards, fabricated parts, and so on—evolve more slowly and can’t be integrated and evaluated every day. But that can’t be an excuse for late and problematic integration. That is why early and frequent integration of components and subsystems is critical, especially for embedded systems and complex solutions.
- Exploratory Early Iterations – As Oosterwal highlights in The Lean Machine, even when building a Harley-Davidson (where it takes a fairly complete team to road test a new motorcycle), more frequent design cycles accelerate knowledge and reduce the risk and cost of errors discovered later. Early, exploratory iterations are therefore especially valuable for hardware and firmware, where they expose wrong assumptions before the cost of change climbs.
- Design verification – Even frequent integration is not enough. First, integration may occur too late in the process because of dependencies and the availability of various system components. Second, it can’t predict and evaluate all potential usage and failure scenarios. To address this, developers of high-assurance systems perform design verification to ensure that a design meets the Solution Intent. This may include specification and analysis of requirements between subsystems, worst-case analysis of tolerances and performance, Failure Mode and Effects Analysis (FMEA), modeling and simulation, full verification and validation, and traceability.
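Set-based design’s narrowing step can be sketched in a few lines. The heat-sink candidates, the measurements, and the thermal limit below are all invented for illustration:

```python
# Set-based design: carry multiple design options forward and eliminate
# them only as empirical data arrives. Candidates and data are invented.
candidates = {
    "heat_sink_A": None,  # measured max temperature (deg C), unknown at start
    "heat_sink_B": None,
    "heat_sink_C": None,
}

TEMP_LIMIT_C = 85.0  # assumed thermal requirement

def record_measurement(candidates, name, measured_temp_c):
    """Attach empirical test data to a design option."""
    candidates[name] = measured_temp_c

def narrow(candidates, limit_c):
    """Keep only options still viable given the data so far.
    Options without data remain in the set (no premature elimination)."""
    return {
        name: temp
        for name, temp in candidates.items()
        if temp is None or temp <= limit_c
    }

# As iterations produce test data, the set narrows:
record_measurement(candidates, "heat_sink_A", 91.2)  # fails the limit
record_measurement(candidates, "heat_sink_B", 78.4)  # passes
candidates = narrow(candidates, TEMP_LIMIT_C)
print(sorted(candidates))  # → ['heat_sink_B', 'heat_sink_C']
```

The key design choice is that an option is dropped only when data disqualifies it; unmeasured options stay in the set, which is what distinguishes set-based from point-based design.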
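The worst-case tolerance analysis mentioned under design verification can be sketched as a simple stack-up check. The part names, dimensions, tolerances, and envelope below are invented for illustration:

```python
# Worst-case tolerance stack-up: one small design-verification activity.
# Each entry is (name, nominal_mm, tolerance_mm); values are invented.
parts = [
    ("bracket", 40.00, 0.15),
    ("spacer",   5.00, 0.05),
    ("shim",     1.00, 0.02),
]

envelope_mm = 46.30  # available space in the assembly (assumed)

def worst_case_length(stack):
    """Sum of nominals, plus the sum of all tolerances (worst case)."""
    nominal = sum(nom for _, nom, _ in stack)
    worst = nominal + sum(tol for _, _, tol in stack)
    return nominal, worst

nominal, worst = worst_case_length(parts)
fits = worst <= envelope_mm  # worst case 46.22 mm fits within 46.30 mm
print(f"nominal={nominal:.2f} mm, worst case={worst:.2f} mm, fits={fits}")
```

Checks like this are run against the Solution Intent before physical integration is possible, which is exactly why they complement rather than replace frequent system integration.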
The Role of DevOps and the Continuous Delivery Pipeline
From planning through delivery, the DevOps mindset and practices increase the collaboration between development and IT operations by building and automating a Continuous Delivery Pipeline. This shift in culture, along with automation, increases the frequency and quality of deployments. In SAFe, this pipeline consists of four elements: Continuous Exploration, Continuous Integration, Continuous Deployment, and Release on Demand.
DevOps simply recognizes that manual processes are the enemy of fast value delivery, high quality, improved productivity, and safety. Automation also enables the fast creation of repeatable development and test environments and processes, which are self-documenting and, therefore, easier to understand, improve, secure, and audit. The entire continuous delivery pipeline is automated to achieve a fast, Lean flow of value with the highest possible quality.
The Role of the Quality Management System
Compliance refers to the strategy, activities, and artifacts that allow teams to apply Lean-Agile development methods to build systems that have the highest possible quality while ensuring they meet any regulatory, industry, or other relevant standards.
To satisfy compliance requirements, organizations must prove that their system meets its intended purpose and has no unintended, harmful consequences. Further, objective evidence must demonstrate that the system conforms to those standards. A Lean Quality Management System (QMS), which defines approved practices, policies, and procedures, is used to build high-assurance systems. Its practices are intended to ensure that development activities and outcomes comply with all relevant regulations and quality standards, and to provide the documentation required to prove it.
Please refer to the Compliance article for more information.
Learn More
- Manifesto for Agile Software Development. www.AgileManifesto.org
- Fowler, Martin. Refactoring: Improving the Design of Existing Code. Addison-Wesley Professional, 1999.
- www.extremeprogramming.org/rules/collective.html
- www.innolution.com/blog/agile-in-a-hardware-firmware-environment-draw-the-cost-of-change-curve
- Oosterwal, Dantar P. The Lean Machine: How Harley-Davidson Drove Top-Line Growth and Profitability with Revolutionary Lean Product Development. Amacom, 2010.
Last updated: 23 October, 2017