All models are wrong, but some are useful.
—George E. P. Box
Model-Based Systems Engineering
Model-Based Systems Engineering (MBSE) is the practice of developing a set of related system models that help define, design, and document a system under development. These models provide an efficient way to explore, update, and communicate system aspects to stakeholders, while significantly reducing or eliminating dependence on traditional documents.
MBSE applies modeling to requirements, design, analysis, and verification activities as a cost-effective way to explore and document system characteristics. By testing and validating system characteristics early, models facilitate timely learning of properties and behaviors, enabling fast feedback on requirements and design decisions.
Although models are never a perfect representation of a system, they provide knowledge and feedback sooner and more cost-effectively than implementation alone. In practice, engineers use models to gain knowledge (e.g., performance) and to serve as a guide for system implementation (e.g., SysML, UML). In some cases, they use them to directly build the actual implementation (e.g., electrical CAD, mechanical CAD).
Lean practices support fast learning through a continuous flow of small pieces of development work to gain fast feedback on decisions. MBSE is a discipline and a Lean tool that allows engineers to quickly and incrementally learn about the system under development before the cost of change gets too high. Models are used to explore the structure and behavior of system elements, evaluate design alternatives, and validate assumptions as early as possible in the system lifecycle. This is particularly useful for large and complex systems—satellites, aircraft, medical systems, and the like—where the solution must be proven practical beyond all possible doubt before, for example, launching into space or connecting to the first patient.

Models also record and communicate decisions that will be useful to others. This information serves as documentation for Compliance, impact analysis, reference by future solution builders, and other needs. In SAFe, model information is recorded as part of the Solution Intent, most often created by the work of Enablers.
The following sections provide guidance on adopting MBSE.
Explore Alternatives and Learn Faster
MBSE supports fast learning cycles (see SAFe Principle #4 – Build incrementally with fast, integrated learning cycles) and helps mitigate risks early in the product life cycle. Models facilitate early learning by testing and validating specific system characteristics, properties, or behaviors, enabling fast feedback on design decisions.
Diagrams, solid models, graphs, equations, simulations, and prototypes—models come in many forms. As Figure 2 illustrates, each provides a different perspective into one or more system characteristics that enable the creation of future Capabilities and Features.
Models may predict performance (response time, reliability) or physical properties (heat, radiation, strength). Or they may explore design alternatives for user experience or response to an external stimulus. But models don't only explore technical design alternatives. The practices of design thinking and user-centered design are synergistic with MBSE and help validate assumptions sooner.
Support Compliance and Impact Analysis
Historically, decisions on requirements, designs, tests, interfaces, allocations, etc., have been maintained in a variety of sources, including documents, spreadsheets, domain-specific tools, and sometimes even on paper. MBSE takes a holistic, system approach to managing system information and data relationships, treating all information as a model. Figure 3 shows a generic structure linking information from multiple types of models.
To quickly and reliably understand the impact of a requirements change on the system, or the impact of a change at the domain level on other parts of the system and requirements, system builders use model traceability. For example, Teams and System Architect/Engineers use model information to support SAFe’s Epic review process. Traceability also provides the objective evidence needed to address many regulatory and contractual compliance concerns.
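The impact analysis described above amounts to a reachability query over the trace links. The sketch below illustrates the idea with a hypothetical in-memory link table; real modeling tools expose equivalent data through their own APIs and query languages.

```python
# Sketch: impact analysis over a traceability graph.
# The element IDs and link table below are hypothetical illustrations,
# not data from any real modeling tool.
from collections import deque

# Trace links: source element -> elements that depend on it
trace = {
    "REQ-12": ["DES-3", "TEST-7"],
    "DES-3": ["DES-9", "TEST-11"],
    "DES-9": ["TEST-15"],
}

def impacted_by(element, links):
    """Return every downstream element reachable from a changed element."""
    seen, queue = set(), deque([element])
    while queue:
        for nxt in links.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# A change to REQ-12 flags all five downstream design and test elements
# for review.
print(sorted(impacted_by("REQ-12", trace)))
```

The same traversal, run in the other direction, answers the complementary question: which requirements are affected by a change to a design element.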
A Lean, continuous-change environment amplifies the need for related models. While manual solutions to manage related information for coverage and compliance may suffice in a stage-gate process, they will be quickly overwhelmed in an Agile environment that encourages frequent and constant change.
Although SAFe prefers that system builders use models to validate assumptions early and to record system decisions, many product domains require documents for regulatory compliance (FAA, FDA, etc.) or contractual obligations (CDRLs in government contracting). Therefore, SAFe recommends generating documents from the information in system models. The models act as a single source of truth for system decisions and ensure consistency across the many documents generated from them. Models can also produce documents targeting different stakeholders who have individual system perspectives, or who may only view select portions of the system information (e.g., Suppliers).
While all products and programs will likely require formal documents, System Engineers are encouraged to work with Customers and/or regulatory agencies on the minimum set sufficient to meet their obligations. The source of most, if not all, of the information resides in engineering models that can and should be used, where possible, for inspections and formal reviews.
Build Model Quality In
Because of the diversity and number of people contributing information, models face a quality challenge: continuous changes made by many contributors can erode quality without proper oversight. The System Architect/Engineer works with teams to define quality practices—model standards and model testing—and to ensure that they are followed.
The quality practices discussed below facilitate early learning cycles. As SAFe notes, "You can't scale crappy code," and the same is true for system models. Quality practices and strong version management allow engineers to confidently and frequently make model changes and contribute to the Solution Intent.
Model standards help control quality and guide teams on how best to model. They may include:
- What information should be captured
- Modeling notations (e.g., SysML) and parts of those notations (e.g., Use Case) to use or exclude
- Where modeling information should be placed for solution and subsystem elements
- Meta-information that should be stored with different types of model elements
- Links within the model or with other cross-discipline models
- Common types and dimensions used across the system
- Modeling tool properties and configuration
- Collaboration practices with any underlying version control system(s) (if relevant)
If documents are being generated from the models, the document templates should be defined early, as they will influence many of these decisions. Systems designers need to know where to store the model elements and any metadata or links that may be used for queries, document generation, or compliance.
As a best practice, create an exemplar early on—a high-level, full-system model. Ideally, the exemplar is a skeleton model spanning the solution (one that defines all subsystems), in which one element illustrates how to model the structure and behavior of a single subsystem. Run any document generation tests against this configuration early to ensure they cover the whole system, thus avoiding wasteful rework due to missed information.
Create Testable and Executable Models
SAFe’s Test-First practices help teams build quality into their products early, facilitating the continuous small changes we find in Agile software development. Test-first creates a rich suite of cases that allow developers to more reliably make changes without causing errors elsewhere in the system. Rich, automated tests are critical to the movement toward a Continuous Delivery Pipeline.
Lean practices encourage testable, executable models (when feasible) to reduce the waste associated with downstream errors. Models should be testable against whatever assessment criteria exist for the domain or discipline:
- Mechanical models test for physical and environmental issues.
- Electrical models test for logic.
- Software models test for anomalies.
- Executable system models test for system behavior.
Most tools provide the ability to check models or to create scripts that can iterate across the models and look for anomalies.
Testing Requirements Models. Textual requirements are used in almost every system and, in current practice, are typically reviewed manually. The community has long sought executable specifications, where requirements are itemized in a format that's test-ready. Success, however, has been limited to specific domains where the problem is well defined. Agile practices advocate an option called Acceptance Test-Driven Development (ATDD), and some exploratory work has emerged to support ATDD in systems development. While promising for testing requirements, ATDD has not yet been used at large scale. Automate where possible, and make requirements and tests one and the same.
Testing Analysis and Design Models. Designs represented in models can be tested both statically and dynamically. Modeling tools typically have static analyzers or “checkers” that look for anomalies. Teams may add their own rules—model organization, modeling conventions and standards, required meta-information (e.g., SysML tags, stereotypes), etc. If analyzers don’t exist, scripts can iterate over the models to look for violations in the static model.
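A custom static check of the kind described above can be sketched as a script that iterates over model elements and reports standard violations. The element structure and the three rules below are hypothetical; real tools expose similar element data through their scripting APIs.

```python
# Sketch: a custom static check that iterates over model elements and flags
# violations of a team's modeling standards. The elements and rules are
# hypothetical illustrations of what a tool's API would provide.

elements = [
    {"name": "PowerSubsystem", "stereotype": "block", "owner": "Solution"},
    {"name": "thermalCtrl",    "stereotype": None,    "owner": "Solution"},
    {"name": "CommsSubsystem", "stereotype": "block", "owner": None},
]

def check_model(elems):
    """Return a list of human-readable standard violations."""
    violations = []
    for e in elems:
        if not e["stereotype"]:
            violations.append(f"{e['name']}: missing required stereotype")
        if not e["owner"]:
            violations.append(f"{e['name']}: not placed under an owning package")
        if not e["name"][0].isupper():
            violations.append(f"{e['name']}: block names must be UpperCamelCase")
    return violations

for v in check_model(elements):
    print(v)
```

Checks like these can run on every model commit, giving the same fast feedback for models that linters give for code.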
Models can also be tested dynamically. The models from engineering disciplines have their own solutions for assessing quality and should be leveraged as part of testing practice.
Testing Traceability. To support reliable queries, document generation, and compliance evidence, models must conform to the defined linking structure, and that conformance should itself be tested.
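A traceability conformance test can be a simple coverage check over the link data: every requirement must trace to at least one design element and one test. The link table below is a hypothetical illustration.

```python
# Sketch: verify that the model conforms to a required linking structure,
# namely that every requirement links to at least one design element and
# at least one test. The link data is a hypothetical illustration.

links = {
    "REQ-1": {"design": ["DES-1"], "test": ["TEST-1"]},
    "REQ-2": {"design": [],        "test": ["TEST-2"]},
}

def trace_gaps(reqs):
    """Return a list of requirements whose required trace links are missing."""
    gaps = []
    for req, targets in reqs.items():
        for kind in ("design", "test"):
            if not targets[kind]:
                gaps.append(f"{req}: no {kind} link")
    return gaps

for gap in trace_gaps(links):
    print(gap)
```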
Document Generation. Though possibly redundant with the traceability checks above, document generation may warrant scripts that verify the model is structured properly and that all the data needed by every document template exists. With large models, it's often easier to debug a script than a document template.
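Such a pre-generation check can be as simple as confirming that every model element a template will pull in carries the fields the template needs. The field names and elements below are hypothetical.

```python
# Sketch: before generating a document, confirm that every model element
# the template will use carries the fields the template needs.
# Field names and element data are hypothetical illustrations.

TEMPLATE_FIELDS = {"id", "title", "rationale"}

elements = [
    {"id": "REQ-1", "title": "Max mass", "rationale": "Launch constraint"},
    {"id": "REQ-2", "title": "Battery life"},  # 'rationale' is missing
]

def missing_fields(elems, required):
    """List elements that lack fields the document template depends on."""
    problems = []
    for e in elems:
        for field in sorted(required - e.keys()):
            problems.append(f"{e['id']}: missing '{field}'")
    return problems

for p in missing_fields(elements, TEMPLATE_FIELDS):
    print(p)
```

Failing this check in a script pinpoints the offending element directly, instead of surfacing as a blank section or a cryptic error deep inside the template engine.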
Last update: 19 June, 2017