In order for you to keep up with customer demand, you need to create a deployment pipeline. You need to get everything in version control. You need to automate the entire environment creation process. You need a deployment pipeline where you can create test and production environments, and then deploy code into them, entirely on demand.
—Erik to Grasshopper, The Phoenix Project 
Real, tangible development value occurs only when the end users are successfully operating the Solution in their environment. This demands that the complex routine of deploying to operations, or moving to production, receives early and meaningful attention during development.
To ensure a faster flow of value to the end user, this article suggests mechanisms for tighter integration of development and deployment operations, typically referred to as DevOps. This is accomplished, in part, by integrating personnel from the operations/production team with the Agile Teams in the Agile Release Train (ART). There are also specific suggestions for continuously maintaining deployment readiness throughout the capability and feature development timeline. In turn, this gives the Enterprise the ability to deploy to production more frequently, and thereby lead its industry with the sustainably shortest lead time.
Deployment Operations Is Integral to the Value Stream
The goal of software and systems engineering is to deliver usable and reliable Solutions to the end users. Lean and Agile both emphasize the ability to do so more frequently and reliably. “Leaning” the development process helps development shops gradually establish faster development flow by systematically reducing development cycle time and introducing “built-in quality” approaches. However, in most cases, development teams still deliver solutions to deployment or production in large batches. There, the actual deployment and release of the new solution is likely to be manual, error prone, and unpredictable, which adversely affects release date commitments and delivered quality. In contrast, many shops have very effective Lean manufacturing facilities that are out of sync with the development side of the Value Stream.
To enable organizations to effectively deliver value to the business, a leaner approach must be applied, which incorporates smaller batch sizes and includes deployment readiness from Capability definition all the way through to the point where users actually benefit. Thereby we can move closer to continuous delivery, which is an end goal for many Enterprises. While this article is mainly focused on DevOps practices in the setting of internal and external IT software development, DevOps is more than just a set of practices and can be applied to the larger context of both software and systems development. More important, DevOps is also about changing the mind-sets of people to break down silos; obtain fast, useful feedback; and improve the flow of work and collaboration to bring about the best economic outcomes and develop products that delight Customers. Note: For more on this topic, see this video by Scott Prugh from the DevOps Enterprise Summit 2014.
Deployment Operations Must Be “on the Train”
In SAFe, value streams cover all steps of software value creation and delivery. The Agile Release Train, a self-organized team of Agile Teams, is designed to enable effective flow of value in a particular value stream via a steady stream of Releases. Some value streams can be realized by a single ART, while some require many ARTs to deliver the solution. In order to establish an effective deployment pipeline and process, it is crucial that the deployment operations or production team members are active on the train and fully engaged in the process. In multi-ART value streams, DevOps is part of the trains, as well as part of the solution team at the Value Stream Level.
DevOps typically includes system administrators, database administrators, operational engineers, and network and storage engineers. However, DevOps principles, values, and practices can also be extended to manufacturing experts and others—those who are traditionally responsible for deploying or manufacturing the solution and keeping it running. They must operate in the shared, real-time development and delivery context of the Agile Release Train.

The Internet of Things (IoT) is also set to be the next wave of market disruption for those building and connecting things. It will require new thinking about how to integrate widely distributed software, hardware and systems, IT operations, and the quality practices needed for rapid deployment.

Being part of the Agile Release Train means actively participating in ART events—interacting with teams, System and Solution Architect-Engineering, and Business Owners. It also includes developing and maintaining a backlog of DevOps activities aligned with Program PI Objectives. This implies some level of participation by the deployment operations team in PI Planning, backlog refinement, Scrum of Scrums, the System Demo, the Solution Demo, and Inspect and Adapt. It is also important to visualize the pipeline of deployable work, so that development teams and operations/production can work collaboratively to ensure the flow of value from the time capabilities are conceived until they are actually deployed and used.
Six Recommended Practices for Building Your Deployment Pipeline
In order to establish an effective deployment pipeline—a continuous flow of new value from development to and through deployment—the following sections describe six specific practices. These are based on an understanding of the broader, more automated environment that is necessary, as Figure 1 illustrates.
We can see from Figure 1 that there are three main processes that must be supported:
- Automatically fetching all necessary artifacts from version control, including code, scripts, tests, metadata, and the like
- Integrating, deploying, and validating the code in a staging environment, automated as far as possible
- Deploying the validated build from staging to production and validating the new system
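The three processes above can be sketched as a minimal pipeline driver. This is an illustrative sketch only: the stage names are taken from Figure 1, but the stage bodies (which in practice would invoke version control, build, test, and deployment tooling) are placeholder assumptions.

```python
# Minimal sketch of a three-stage deployment pipeline.
# Stage bodies are placeholders; real stages would call your
# version control, build, test, and deployment tools.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Stage:
    name: str
    run: Callable[[], bool]  # returns True on success

@dataclass
class Pipeline:
    stages: List[Stage]
    log: List[str] = field(default_factory=list)

    def execute(self) -> bool:
        """Run stages in order, stopping at the first failure so a
        bad build never reaches the next environment."""
        for stage in self.stages:
            ok = stage.run()
            self.log.append(f"{stage.name}: {'ok' if ok else 'FAILED'}")
            if not ok:
                return False
        return True

# Wire the three processes from Figure 1 as ordered stages.
pipeline = Pipeline(stages=[
    Stage("fetch-from-version-control", lambda: True),   # code, scripts, tests, metadata
    Stage("deploy-and-validate-staging", lambda: True),  # integrate, deploy, validate
    Stage("promote-to-production", lambda: True),        # deploy validated build
])
```

The key design point is the hard ordering: a build that fails validation in staging never becomes a candidate for production.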
#1 – Build and Maintain a Production-Equivalent Staging Environment
Figure 1 illustrates a critical asset: a staging environment. The need for this environment is driven by the fact that most development environments are typically quite different from production. There, for example, the application server is behind a firewall, which is preceded by a load balancer; the much larger-scale production database is clustered; media content lives on separate servers; and more. During the handoff from development to production, Murphy’s Law will take effect: The deployment will fail, and debugging and resolution will take an unpredictable amount of time. Instead, the company should build a staging environment that has the same or similar hardware and supporting systems as production. There, the program teams can continuously validate their software in preparation for deployment. While achieving true production equivalency may not be economically viable—it doesn’t make sense to replicate the hundreds or thousands of servers production may require—there are a variety of ways to achieve largely functional equivalence without such investment. For example, two instances of the application server instead of 20, and a cheaper load balancer from the same vendor, may be all that is necessary to ensure that new functionality is validated in a load-balanced environment.
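One way to think about “functional equivalence without full-scale investment” is to derive the staging topology directly from the production specification, keeping software versions identical while scaling instance counts down. The sketch below is illustrative; the role names, version strings, and the 10% scale factor are assumptions, not figures from the article.

```python
# Sketch: derive a functionally equivalent staging topology from a
# production spec. Role names, versions, and scale are illustrative.

PRODUCTION = {
    "app_server":    {"instances": 20, "version": "5.2.1"},
    "db_cluster":    {"instances": 6,  "version": "12.4"},
    "load_balancer": {"instances": 2,  "version": "3.9"},
}

def staging_spec(prod, scale=0.1, floor=2):
    """Keep versions identical (functional equivalence) but run far
    fewer instances (economic viability). The floor of 2 preserves
    clustered/load-balanced behavior that a single instance would hide."""
    return {
        role: {**cfg, "instances": max(floor, int(cfg["instances"] * scale))}
        for role, cfg in prod.items()
    }
```

Keeping at least two instances per role matters: many deployment failures (session affinity, cluster failover, balancer configuration) only appear when more than one node is involved.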
#2 – Maintain Development and Test Environments to Better Match Production
The staging environment mitigates a different root cause, which itself can also be addressed: the various development and test environments required for development (especially distributed development, which is routine at scale) do not closely match production. Part of the reason for this is cost, which may be driven by practical economics, often compounded by a lack of understanding of the true cost of delayed deployment. So, for example, while it may not be practical to have a separate load balancer for every developer, software configurations can typically be affordably replicated across all environments. This can be accomplished by propagating all changes in production (such as component or supporting-system upgrades, configurations, changes in system metadata, etc.) back to the other environments. This should also include new development-initiated configuration/environment changes that are made to production during each deployment. All configuration changes need to be captured in version control, and all new actions required to enable the deployment process should be documented in scripts and automated wherever possible.
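Propagating production changes back to other environments implies being able to detect where they differ. A minimal sketch of such a drift check follows; the configuration keys and values are invented for illustration.

```python
# Sketch: detect configuration drift between production and another
# environment so changes can be back-propagated and committed to
# version control. Keys and values are illustrative assumptions.

def config_drift(production, other):
    """Return every setting that differs from (or is missing in) the
    other environment, with both values shown for review."""
    drift = {}
    for key, prod_value in production.items():
        if other.get(key) != prod_value:
            drift[key] = {"production": prod_value,
                          "environment": other.get(key)}
    return drift
```

Run periodically (or on every production change), this kind of check turns “environments slowly diverge” from a silent failure mode into a visible, actionable report.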
#3 – Deploy to Staging Every Iteration; Deploy to Production Frequently
3a – Deploy to Staging Every Iteration. It is not possible to objectively understand the true state of any increment unless it is truly deployment ready. To better ensure this, one suggestion is a simple rule: Do all fortnightly system demos from the staging environment, including the final system and solution demos. In that way, deployability becomes part of Definition of Done for every user story, resulting in potentially deployable software every Iteration.
3b – Deploy to Production Frequently. And while continuous deployment readiness is critical to establishing a reliable delivery process, the real benefits in shortening lead time come from actually deploying to production more frequently. This also helps eliminate long-lived production support branches and the consequential extra effort required to merge and synchronize changes across all the instances. However, while Figure 1 shows a fully automated deployment model, we must also consider release governance before we allow an “automatic push” to production. This is typically under the authority of Release Management, whose considerations include:
- Conceptual integrity of the feature set, such that it meets a holistic set of user needs (see Releases)
- Evaluation of any quality, regulatory, security, or other such requirements
- Potential impacts to customers, channels, production, and any external market timing considerations
#4 – Put Everything Under Version Control
In order to be able to reliably deploy to production, all deployable artifacts, metadata, and other supporting configuration items must be maintained under version control. This includes the new code, all required data (dictionaries, scripts, look-ups, mappings, etc.), all libraries and external assemblies, configuration files or databases, application or database servers—everything that may realistically be updated or modified. This approach also applies to the test data, which has to be manageable enough for the teams to update every time they introduce a new test scenario or update an existing one. Keeping the test data under version control provides quick and reliable feedback on the latest build by allowing for repeatedly running the tests in a deterministic test environment.
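A simple guard for “everything under version control” is a completeness check: compare the artifact categories a deployment needs against what the repository actually tracks. The categories below are examples drawn from the list above, not a prescribed taxonomy.

```python
# Sketch: verify that every artifact category needed for deployment
# is tracked in version control. Categories are illustrative examples.

REQUIRED_ARTIFACTS = {
    "code", "database_scripts", "config_files",
    "test_data", "deployment_scripts", "libraries",
}

def untracked_artifacts(tracked: set) -> set:
    """Anything deployment needs but version control lacks is a
    deployment risk; return the gap so it can be closed."""
    return REQUIRED_ARTIFACTS - tracked
```

In practice, the `tracked` set would be derived from the repository itself (e.g., by mapping tracked paths to categories), and a non-empty result would fail the build.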
#5 – Start Creating the Ability to Automatically Build Environments
Many deployment problems arise from the error-prone, manually intensive routines that are required to build the actual run-time environments. These include preparing the operating environment, applications, and data; configuring the artifacts; and initiating the required jobs in the system and its supporting systems. In order to establish a reliable deployment process, the environment setup process itself needs to be fully automated. This automation can be substantially facilitated by virtualization, using Infrastructure as a Service (IaaS) and applying special frameworks for automating the configuration management jobs.
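The property that makes automated environment builds reliable is idempotency: running the build twice leaves the environment in the same state. The sketch below models this with a plain dictionary; a real implementation would call IaaS or configuration-management APIs, and the step names and versions here are invented for illustration.

```python
# Sketch of an idempotent environment-build routine: each step only
# sets state that is not already present, so re-running the build is
# safe. Step names and versions are illustrative assumptions; a real
# build would call IaaS / configuration-management tooling instead.

def build_environment(state):
    steps = [
        ("provision_vm",   lambda s: s.setdefault("vm", "running")),
        ("install_app",    lambda s: s.setdefault("app", "5.2.1")),
        ("load_seed_data", lambda s: s.setdefault("data", "seeded")),
    ]
    for name, apply in steps:
        apply(state)  # no-op if this step's state already exists
    return state
```

Because each step is a no-op when its state already exists, the same script serves for both initial creation and repair, which is exactly what removes the error-prone manual routine.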
#6 – Start Automating the Actual Deployment Process
Last, it should be obvious that the actual deployment process itself also requires automation. This includes all the steps in the flow, including building the code, creating the test environments, executing the automated tests, and deploying and validating verified code and associated systems and utilities in the target (development, staging, and production) environment. This final, critical automation step is achievable only via an incremental process, one that requires the organization’s full commitment and support, as well as creativity and pragmatism as the teams prioritize target areas for automation.
Deployability and Solution Architecture
A Lean, systems approach to continuous deployment readiness requires understanding that the code itself as well as configurations, integration scripts, automated tests, deployment scripts, metadata, and supporting systems are equally important pieces of the entire solution. Faster achievement of user and business goals can be achieved only when the whole solution context is considered. As with the other Nonfunctional Requirements (NFRs), designing for deployability requires intentional design effort via collaboration of System Architects, Agile Teams, and deployment operations toward the goal of a fast and flawless delivery process, requiring minimal manual effort. In addition, in order to support the adoption of more effective deployment techniques, the organization may need to undertake certain enterprise-level architectural initiatives (common technology stacks, tools, data preparation and normalization, third-party interfaces and data exchange, logging and reporting tools, etc.) that gradually enhance architecture, infrastructure, and other nonfunctional considerations in support of deployment readiness. Everything that jeopardizes or complicates the process of getting valuable software out the door must eventually be improved.
 Humble, Jez and David Farley. Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation. Addison-Wesley, 2010.
 Kim, Gene, et al. The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win. IT Revolution Press, 2013.
 Prugh, Scott. Continuous Delivery. DevOps Enterprise Summit, 2014. /continuous-delivery/
Last update: 1 April 2016