“Inspection does not improve the quality, nor guarantee quality. Inspection is too late. The quality, good or bad, is already in the product. Quality cannot be inspected into a product or service; it must be built into it.”

—W. Edwards Deming

Built-In Quality

Built-In Quality practices ensure that each Solution element, at every increment, meets appropriate quality standards throughout development.

The enterprise’s ability to deliver new functionality with the shortest sustainable lead time, and to adapt to rapidly changing business environments, depends on solution quality. So, it should be no surprise that built-in quality is one of the SAFe Core Values as well as a principle of the Agile Manifesto: “Continuous attention to technical excellence and good design enhances agility” [1]. Built-in quality is also a core principle of the Lean-Agile Mindset, helping to avoid the Cost of Delay (CoD) associated with recalls, rework, and fixing defects. SAFe’s built-in quality philosophy applies systems thinking to optimize the whole system, ensures fast flow across the entire Value Stream, and makes quality everyone’s job.

Software and hardware share the goals and principles of built-in quality. However, the work physics and economics are somewhat different for hardware. This article will discuss both.

Details

Enterprises must continually respond to market changes. Software and systems built on stable technical foundations are easier to change and adapt. This is even more critical for large solutions, as the cumulative effect of even minor defects and wrong assumptions can create unacceptable consequences.

Building high-quality systems is serious work that requires ongoing training and commitment, but the business benefits warrant the investment:

  • Higher customer satisfaction
  • Improved velocity and delivery predictability
  • Better system performance
  • Improved ability to innovate, scale, and meet compliance requirements

Figure 1 illustrates five dimensions of built-in quality. The first, flow, speaks to the fact that built-in quality is mandatory for achieving a state of continuous value flow. The other four describe quality as it applies to the system itself. Each dimension is discussed below.

Figure 1. Five dimensions of built-in quality

Achieving Flow with a Continuous Delivery Pipeline and Test-First

Built-in quality enables the SAFe Continuous Delivery Pipeline and the ability to Release on Demand. Figure 2 illustrates the continuous integration portion of SAFe’s DevOps Radar and shows how changes built into components are tested across multiple environments before arriving in production. ‘Test doubles’ speed testing by substituting slow or expensive components (e.g., enterprise database) with faster proxies (e.g., in-memory database proxy).

Figure 2. A continuous delivery pipeline ensures quality before delivery into production
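To make the test-double idea concrete, here is a minimal sketch in which an in-memory fake stands in for an enterprise database during the pipeline’s fast test stages. The OrderRepository interface and its implementations are invented for illustration and are not part of SAFe or any specific library:

```python
# A minimal test-double sketch: fast pipeline stages run against an
# in-memory fake instead of a slow enterprise database.
# All names here are illustrative assumptions, not SAFe-defined APIs.
from abc import ABC, abstractmethod


class OrderRepository(ABC):
    """Seam: production code depends only on this interface."""

    @abstractmethod
    def save(self, order_id: str, total: float) -> None: ...

    @abstractmethod
    def total_for(self, order_id: str) -> float: ...


class InMemoryOrderRepository(OrderRepository):
    """Test double: behaves like the database, but runs in microseconds."""

    def __init__(self) -> None:
        self._orders: dict[str, float] = {}

    def save(self, order_id: str, total: float) -> None:
        self._orders[order_id] = total

    def total_for(self, order_id: str) -> float:
        return self._orders[order_id]


def test_order_total_is_persisted() -> None:
    # Later pipeline stages would swap in the real database implementation.
    repo: OrderRepository = InMemoryOrderRepository()
    repo.save("A-100", 42.50)
    assert repo.total_for("A-100") == 42.50
```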

To support the pipeline’s testing needs, the organization must shift testing left, moving it earlier in the development cycle. As part of its specification, every system behavior—Story, Feature, Capability, Non-Functional Requirement (NFR)—includes automated tests. This Test-First approach allows teams to apply quality practices early and continuously, while building a rich set of tests for the pipeline.
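As a small illustration of Test-First, the sketch below expresses a story’s acceptance criteria as automated tests agreed upon before the implementation was written. The shipping_cost function and its free-shipping rule are hypothetical examples, not SAFe-defined behavior:

```python
# Test-first sketch: the story "Free shipping for orders over $50" is
# specified as automated tests before shipping_cost() exists.
# The function and its rules are invented for illustration.
import pytest


def shipping_cost(order_total: float) -> float:
    """Implementation written *after* the tests below were agreed upon."""
    if order_total < 0:
        raise ValueError("order total cannot be negative")
    return 0.0 if order_total >= 50.0 else 5.99


def test_orders_at_or_over_threshold_ship_free() -> None:
    assert shipping_cost(50.0) == 0.0


def test_orders_under_threshold_pay_flat_rate() -> None:
    assert shipping_cost(49.99) == 5.99


def test_negative_totals_are_rejected() -> None:
    # Edge case captured up front, as part of the specification.
    with pytest.raises(ValueError):
        shipping_cost(-1.0)
```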

The remainder of this article discusses the other four dimensions of built-in quality—architecture and design, code, system, and release.

Achieving Architecture and Design Quality

Built-in quality begins with architecture and design, which ultimately determine how well a system can support current and future business needs. Quality in architecture and design makes future requirements easier to implement, makes systems easier to test, and helps satisfy NFRs.

Support Future Business Needs

As requirements evolve based on market changes, development discoveries, and other factors, architectures and designs must also evolve. Traditional processes force early decisions that often leave two poor choices: rework to change the decision, or living with the inefficiencies of a suboptimal one. Identifying the best decision requires knowledge gained through experimentation, modeling, simulation, prototyping, and other learning activities. It also requires a Set-Based Design approach that evaluates multiple alternatives to arrive at the best decision. Once a decision is made, developers use the Architecture Runway to implement it. Agile Architecture provides the intentional guidance for inter-team design and implementation synchronization.

Several design characteristics are predictors of good quality, ensuring that artifacts are understandable and maintainable: high cohesion and low coupling, good abstraction and encapsulation, readability, and separation of concerns. The SOLID principles [2] help create designs with these characteristics.
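As a brief sketch of one of these principles (the ‘D’ in SOLID, dependency inversion), the following example has a high-level service depend on an abstraction rather than a concrete detail. The Notifier, EmailNotifier, and ReportService names are invented purely for illustration:

```python
# Dependency-inversion sketch: ReportService (high-level policy) depends
# on the Notifier abstraction, not on a concrete email library, giving
# low coupling and an easy seam for testing. Names are illustrative.
from abc import ABC, abstractmethod


class Notifier(ABC):
    @abstractmethod
    def send(self, message: str) -> None: ...


class EmailNotifier(Notifier):
    def send(self, message: str) -> None:
        print(f"emailing: {message}")  # stand-in for a real email client


class ReportService:
    def __init__(self, notifier: Notifier) -> None:
        self._notifier = notifier  # injected abstraction, not a concrete class

    def publish(self, report_name: str) -> None:
        self._notifier.send(f"report ready: {report_name}")


ReportService(EmailNotifier()).publish("Q3 quality metrics")
```

A test could inject a fake Notifier in place of EmailNotifier without touching ReportService, which is exactly the testability benefit described in the next section.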

Architecting and Designing to Ease Testing

Architecture and design also determine a system’s testability. Modular components that communicate through well-defined interfaces contain seams [3] that allow testers and developers to substitute expensive or slow components with test doubles. As an example, Figure 3 shows a Speed Controller component that needs the current vehicle location from a GPS Location component to adjust its speed. Testing the Speed Controller with the real GPS Location component requires the associated GPS hardware and signal generators to replicate GPS satellites. Replacing that complexity with a test double decreases the time and effort needed to develop and test the Speed Controller, or any other component that interfaces with GPS Location.

Figure 3. Modular components create seams to simplify testing
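In code, the seam in Figure 3 might look like the following minimal sketch; the interface, class names, and speed-limit behavior are assumptions made for illustration, not details from the article:

```python
# Sketch of Figure 3's seam: SpeedController depends on a LocationProvider
# interface, so a scripted test double can replace GPS hardware in tests.
# All names and the speed-limit rule are illustrative assumptions.
from abc import ABC, abstractmethod


class LocationProvider(ABC):
    """Seam between the Speed Controller and the GPS Location component."""

    @abstractmethod
    def current_speed_limit_kph(self) -> float: ...


class StubGpsLocation(LocationProvider):
    """Test double: returns a scripted limit instead of reading satellites."""

    def __init__(self, limit_kph: float) -> None:
        self._limit_kph = limit_kph

    def current_speed_limit_kph(self) -> float:
        return self._limit_kph


class SpeedController:
    def __init__(self, location: LocationProvider) -> None:
        self._location = location

    def target_speed(self, requested_kph: float) -> float:
        # Never exceed the posted limit reported by the location component.
        return min(requested_kph, self._location.current_speed_limit_kph())


def test_speed_is_capped_at_posted_limit() -> None:
    controller = SpeedController(StubGpsLocation(limit_kph=80.0))
    assert controller.target_speed(100.0) == 80.0
```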

Applying Design Quality in Cyber-Physical Systems

These design principles also apply to cyber-physical systems. Engineers in many disciplines use modeling and simulation to gain design knowledge. For example, integrated circuit (IC) design technologies (VHDL, Verilog) are software-like and benefit from the same design characteristics and SOLID principles [4]. Hardware designs also apply the notion of test doubles, whether through simulations and models or by building a wooden prototype before cutting metal.

This often requires a mindset change. Like software, hardware will also change. Instead of optimizing to complete a design for the current need, planning for future changes by building in quality provides better long-term outcomes.

Achieving Code Quality

All system capabilities are ultimately executed by the code (or components) of a system. The speed and ease with which new capabilities can be added depend on how quickly and reliably developers can modify that code. Several practices, inspired in part by Extreme Programming (XP) [5], are described here.

Unit Testing and Test-Driven Development

The unit testing practice breaks the code into parts and ensures that each part has automated tests that exercise it. These tests run automatically after each change, allowing developers to make changes quickly, confident that a modification won’t break another part of the system. Tests also serve as documentation: executable examples of interactions with a component’s interface that show how the component should be used.

Test-Driven Development (TDD) guides the creation of unit tests by writing the test for a change before implementing the change. This forces developers to think more broadly about the problem, including edge cases and boundary conditions, before implementation. Better understanding results in faster development with fewer errors and less rework.
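A minimal sketch of how TDD captures boundary conditions up front, using pytest; the leap-year function and its cases are a generic illustration, not an example from SAFe:

```python
# TDD sketch: these parametrized cases were written first, forcing the
# boundary conditions (century years, multiples of 400) to be considered
# before is_leap_year() was implemented. The example is illustrative.
import pytest


def is_leap_year(year: int) -> bool:
    """Implementation added only after the tests below failed ('red')."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)


@pytest.mark.parametrize(
    ("year", "expected"),
    [
        (2024, True),   # ordinary leap year
        (2023, False),  # ordinary non-leap year
        (1900, False),  # boundary: century years are not leap years...
        (2000, True),   # ...unless divisible by 400
    ],
)
def test_leap_year_boundaries(year: int, expected: bool) -> None:
    assert is_leap_year(year) == expected
```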

Pair Work

Pairing has two developers work on the same change at the same workstation. One serves as the driver, writing the code, while the other acts as the navigator, providing real-time review and feedback. Developers switch roles frequently. Pairing creates and maintains quality, as the code will contain the shared knowledge, perspectives, and best practices of both members. It also raises and broadens the skill set of the entire team, as teammates learn from each other.

Collective Ownership and Coding Standards

Collective ownership reduces dependencies between teams and ensures that no single developer or team will block the fast flow of value delivery. As part of making a change, “any developer can change any line of code to add functionality, fix bugs, improve designs, or refactor” [6]. Because the code is not owned by any one team or individual, coding standards encourage consistency, so that everyone can understand and maintain the quality of each component.

Applying Code Quality in Cyber-Physical Systems

While not all hardware design has ‘code,’ the creation of physical artifacts is a collaborative process that can benefit from these practices. Computer-Aided Design (CAD) tools used in hardware development provide ‘unit tests’ in the form of assertions for electronic design, and simulations and analyses for mechanical design. Pairing, collective ownership, and coding standards can produce similar benefits, creating designs that are more easily maintained and modified.

Some hardware design technologies are very similar to code (e.g., VHDL), with clearly defined inputs and outputs that make them ideal for practices like TDD [4].

Achieving System Quality

While code and design quality ensure that system artifacts can be easily understood and changed, system quality confirms that the system works as expected and that everyone is aligned on what changes to make. Tips for achieving system quality are highlighted below.

Create Alignment to Achieve Fast Flow

Alignment and shared understanding reduce developer delays and rework, enabling fast flow. Behavior-Driven Development (BDD) is a collaborative practice in which the Product Owner and Dev Team agree on the precise behavior of a story or feature. Applying BDD helps developers build the right behavior the first time, reducing rework and errors. Model-Based Systems Engineering (MBSE) scales this alignment to the whole system. Through an analysis and synthesis process, MBSE provides a high-level, complete view of all the proposed functionality for a system and how the system design realizes it.
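BDD scenarios are commonly phrased as Given/When/Then statements that map directly onto automated tests. Here is a hypothetical sketch in plain pytest style (dedicated BDD tools exist but are not required); the Account class and its rules are invented for illustration:

```python
# BDD-style sketch: the Product Owner and team agree on the scenario
# "Given an account with $100, when $30 is withdrawn, then the balance
# is $70," and the test mirrors that wording. Account is an invented example.
class Account:
    def __init__(self, balance: float) -> None:
        self.balance = balance

    def withdraw(self, amount: float) -> None:
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount


def test_withdrawal_reduces_balance() -> None:
    # Given an account with $100
    account = Account(balance=100.0)
    # When $30 is withdrawn
    account.withdraw(30.0)
    # Then the balance is $70
    assert account.balance == 70.0
```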

Continuously Integrate the End-to-End Solution

As shown in Figure 2, continuous integration (CI) and continuous deployment (CD) provide developers with fast feedback on changes. Each change is built quickly, then integrated and tested at multiple levels, including in the deployment environment. CI/CD automates the process of moving changes through these stages and responding when a test fails. Quality tests for NFRs are also automated. While CI/CD strives to automate all tests, some functional testing (e.g., exploratory) and some NFR testing (e.g., usability) can only be performed manually.
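As one example of automating an NFR, a response-time budget can be asserted directly in the pipeline’s test suite. The handle_request function, the 200 ms budget, and the iteration count below are all placeholder assumptions, not values from the article:

```python
# Sketch of an automated NFR test: the pipeline fails the build if the
# (placeholder) handle_request() exceeds its response-time budget.
# The function, budget, and iteration count are illustrative assumptions.
import time


def handle_request() -> str:
    """Stand-in for the system operation under test."""
    time.sleep(0.005)  # simulate some work
    return "ok"


def test_request_meets_response_time_budget() -> None:
    budget_seconds = 0.200  # NFR: respond within 200 ms
    iterations = 10         # average several calls to reduce timing noise
    start = time.perf_counter()
    for _ in range(iterations):
        assert handle_request() == "ok"
    average = (time.perf_counter() - start) / iterations
    assert average < budget_seconds, f"average {average:.3f}s exceeds budget"
```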

Applying System Quality in Cyber-Physical Systems

Cyber-physical systems can also support a fast-flow, CI/CD approach—even with a long lead time for physical parts. As stated earlier, simulations, models, previous hardware versions, and other proxies can substitute for the final system components. Figure 4 illustrates a system team providing a demonstrable platform to test incremental behavior by connecting these component proxies. As each component matures (shown by increasing redness), the end-to-end integration platform matures as well. With this approach, component teams become responsible for supporting both their part of the final solution as well as maturing the incremental, end-to-end testing platform.

Figure 4. Continuous integration (CI) for cyber-physical systems

Achieving Release Quality

Releasing allows the business to measure the effectiveness of a Feature’s benefit hypothesis. The faster an organization releases, the faster it learns and the more value it delivers. Modular architectures that define standard interfaces between components allow smaller, component-level changes to be released independently. Smaller changes allow faster, more frequent, less risky releases, but require an automated pipeline (shown in Figure 2) to ensure quality.

Unlike a traditional server infrastructure, an ‘immutable infrastructure’ does not allow changes to be made manually and directly to production servers. Instead, changes are applied to server images, validated, and then launched to replace the currently running servers. This approach creates more consistent, predictable releases. It also allows automated recovery: if the operational environment detects a production error, it can roll back the release by simply launching the previous image to replace the erroneous one.
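The control flow of such an image-based release might be sketched as follows. Every function here is a hypothetical stand-in for real build and deployment tooling, not the API of any specific product:

```python
# Immutable-infrastructure sketch: changes go into a new server image that
# is validated and swapped in; rollback relaunches the previous image.
# Every function here is a hypothetical stand-in for real tooling.
def build_image(version: str) -> str:
    return f"app-image:{version}"            # bake code + config into an image


def validate(image: str) -> bool:
    return True                              # smoke tests against the new image


def launch(image: str) -> None:
    print(f"traffic now served by {image}")  # replace the running servers


def release(new_version: str, current_image: str) -> str:
    """Deploy a new image; roll back by relaunching the previous one."""
    candidate = build_image(new_version)
    if not validate(candidate):
        launch(current_image)                # automated rollback: old image wins
        return current_image
    launch(candidate)                        # no manual edits to live servers
    return candidate


current = release("2.4.1", current_image="app-image:2.4.0")
```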

Supporting Compliance

For systems that must demonstrate objective evidence for compliance or audit, releasing has additional conditions. These organizations must prove that the system meets its intended purpose and has no unintended, harmful consequences. As described in the Compliance article, a Lean Quality Management System (QMS) defines approved practices, policies, and procedures that support a Lean-Agile, flow-based, continuous integrate-deploy-release process.

Scalable Definition of Done

The Definition of Done is an important way of ensuring that an increment of value can be considered complete. The continuous development of incremental system functionality requires a scaled Definition of Done to ensure that the right work is done at the right time, some early and some only for release. An example is shown in Table 1, but each team, train, and enterprise should build its own definition. While these definitions may differ for each ART or team, they usually share a core set of items.

Table 1. Example SAFe scalable Definition of Done

Achieving Release Quality in Cyber-Physical Systems

The aim of release quality is not to let changes lie idle, waiting to be integrated. Instead, integrate changes quickly and frequently through successively larger portions of the system until the change arrives in an environment for validation. Some cyber-physical systems may validate in the customer environment (e.g., over-the-air updates in vehicles). Others proxy that environment with one or more mockups designed to gain early feedback, as shown previously in Figure 4. The end-to-end platform matures over time, providing higher levels of fidelity that enable earlier verification and validation (V&V) as well as compliance efforts. For many systems, early V&V and compliance feedback are critical to understanding the ability to release.



Learn More

[1] Manifesto for Agile Software Development. www.AgileManifesto.org.

[2] Martin, Robert. The Principles of OOD. http://butunclebob.com/ArticleS.UncleBob.PrinciplesOfOod

[3] Feathers, Michael. Working Effectively with Legacy Code. Prentice Hall, 2004.

[4] Jasinski, Ricardo. Effective Coding with VHDL: Principles and Best Practice. The MIT Press, 2016.

[5] Beck, Kent. Extreme Programming Explained: Embrace Change. Addison-Wesley, 1999.

[6] www.extremeprogramming.org/rules/collective.html

Last updated: 4 October 2018