Model-Based Systems Engineering

All models are wrong, but some are useful.

—George E. P. Box

Abstract

Since humans began engineering systems and structures, they have used models to abstract, learn, and communicate. Model-Based Systems Engineering (MBSE) is the application of modeling to requirements, design, analysis, and verification activities as a cost-effective way to learn about system characteristics prior to construction. Models provide early feedback on system decisions and communicate intended aspects of the system to everyone involved, including Customers, stakeholders, developers, and testers.

While, as Dr. Box asserts, models are never a perfect representation of the real world, they provide knowledge and feedback earlier and more cheaply than could possibly be achieved through implementation. In practice, engineers use models to gain knowledge (e.g., performance, thermal, fluid), to serve as a guide for system implementation (e.g., SysML, UML) or, in some cases, to directly build the actual implementation (e.g., electrical CAD, mechanical CAD).

Models facilitate early learning by testing and validating system characteristics, properties, and behaviors early, enabling early feedback on requirements and design decisions.

Details

Engineering disciplines have used modeling for years to discover and specify structure and behavior, explore alternatives, and communicate decisions early in the life cycle. Lean practices support fast learning through a continuous flow of small work to gain fast feedback on decisions. Model-Based Systems Engineering (MBSE) is a discipline and a Lean tool that allows engineers to quickly and incrementally learn about the system under development before the cost of change gets too high.

Support Learning Cycles with Models

MBSE supports fast learning cycles and helps mitigate risks early in the product life cycle. Models facilitate early learning by testing and validating system characteristics, properties, and behaviors early, enabling early feedback on design decisions.

Models come in many forms, including dynamic models, solid models, graphs, equations, simulations, and prototypes. As Figure 1 illustrates, each provides a different perspective into one or more system characteristics that enable the creation of Capabilities and Features.

Figure 1. Models and learning cycles

Models may predict performance (response time, reliability) or physical properties (heat, radiation, strength), or they may explore designs for user experience or response to an external stimulus.

Use Traceability for Impact Analysis and Compliance

Relationships among models can help further predict system utility and support analysis and compliance activities. Systems may have models from many disciplines—electrical, mechanical, software—as well as system-level requirements and design models. Traceability is achieved by relating system elements within and across models. It provides many benefits:

  • Simplifies and automates most regulatory and contractual compliance needs
  • Reduces time and effort for impact analysis needed to support new work and provides information for Inspect and Adapt activities
  • Facilitates cross-discipline collaboration by allowing traceability from a model in one discipline to a model in another
  • Encourages general knowledge discovery by making information, and related cross-discipline information, more accessible to teams

A Lean, continuous change environment amplifies the need for related models. While manual solutions to managing coverage and compliance may work adequately in a stage-gate process, they will be quickly overwhelmed by an Agile environment that encourages frequent and continuous change.

Figure 2 shows a high-level model link structure in which requirements are linked to tests in a test model and to system-level elements in a system model.

Figure 2. Linking cross-domain models

Domain models then link to the system model. Engineers can quickly and reliably understand the impact of a requirements change on the system and the domains, or the impact of a change at the domain level on other parts of the system and requirements. In addition, the existence or absence of links shows coverage and compliance gaps (e.g., requirements not tested).

Instead of manually creating and maintaining this traceability information, compliance documents can be generated from model information and the links between them. When the system requirements or designs change, traceability information shows the impact of a change, any gaps in test/requirements coverage, and any linked information that is now suspect and should be revisited.
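To make the idea concrete, traceability links can be treated as a small directed graph. The Python sketch below uses hypothetical element names and link types (REQ-*, TEST-*, verified_by, and so on are illustrative assumptions, not any particular tool's schema) to show how coverage gaps and the downstream impact of a requirement change fall out of the link data:

```python
from collections import defaultdict

# Hypothetical link data: (source element, link type, target element).
# Names and link kinds are illustrative, not from any specific tool.
links = [
    ("REQ-1", "verified_by", "TEST-7"),
    ("REQ-1", "satisfied_by", "SYS-BLOCK-A"),
    ("REQ-2", "satisfied_by", "SYS-BLOCK-B"),   # note: no verifying test
    ("SYS-BLOCK-A", "allocated_to", "SW-COMP-3"),
]

requirements = {"REQ-1", "REQ-2"}

def coverage_gaps(links, requirements):
    """Return requirements with no verifying test (a compliance gap)."""
    verified = {src for src, kind, _ in links if kind == "verified_by"}
    return requirements - verified

def impacted_by(links, changed):
    """Walk outgoing links to find every element affected by a change."""
    graph = defaultdict(list)
    for src, _, dst in links:
        graph[src].append(dst)
    impacted, stack = set(), [changed]
    while stack:
        node = stack.pop()
        for dst in graph[node]:
            if dst not in impacted:
                impacted.add(dst)
                stack.append(dst)
    return impacted

print(sorted(coverage_gaps(links, requirements)))  # REQ-2 lacks a test
print(sorted(impacted_by(links, "REQ-1")))         # everything downstream of REQ-1
```

The same queries that flag a suspect link after a change also reveal absent links, which is why one link structure can serve both impact analysis and compliance reporting.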

Record Models in Solution Intent

Technical knowledge and decisions are recorded in Solution Intent for communication with and reference by team members across the entire Solution development. In practice, engineers represent technical information in many forms, from documents to spreadsheets to modeling tools. While documents and spreadsheets are easy for an author to create, they do not encourage the knowledge transfer or the continuous collaboration required for Lean systems engineering. For that, what’s needed is MBSE.

Solution intent will contain many different kinds of models, with many options for organizing and linking information. System and Solution Architects and Engineers are typically responsible for tasks from specifying the model information and organization to ensuring its quality. Agile Teams populate the model(s) with their respective knowledge and information. With many contributors, large systems may assign a model owner role (often a member of Systems Engineering), who is responsible for model content and structural integrity.

Generate Documentation for Compliance

While SAFe emphasizes models for early discovery and system documentation, many product and engineering domains require documents for regulatory compliance (FAA, FDA, etc.) or contractual obligations (CDRLs in government contracting). In addition, not all stakeholders will be well versed in models or their notations and will continue to communicate with documents.

Wherever possible, documents should be generated from information in system models. (Engineers should not duplicate their efforts by creating compliance documents—a Lean example of waste.) Further, documents can be generated for stakeholders with different system perspectives. Architectural framework standards (e.g., DoDAF, MODAF) define multiple stakeholder viewpoints and different views for communicating system information. Generating documents from models is the best way to ensure consistency across all stakeholder views.
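As a minimal sketch of generating a document from model information, the Python below renders a simple verification cross-reference matrix from a hand-written model export; the dictionary stands in for data that would, in practice, come from the modeling tool's API or repository:

```python
from string import Template

# Hypothetical model export: in practice this would be pulled from the
# modeling tool's repository, not hand-written.
model = {
    "REQ-1": {"text": "The pump shall start within 2 seconds.", "tests": ["TEST-7"]},
    "REQ-2": {"text": "The pump shall stop on overheat.", "tests": []},
}

row = Template("| $rid | $text | $tests |")

def generate_vcrm(model):
    """Render a verification cross-reference matrix as a Markdown table."""
    lines = ["| Requirement | Text | Verifying tests |",
             "|---|---|---|"]
    for rid, data in sorted(model.items()):
        tests = ", ".join(data["tests"]) or "NONE (gap)"
        lines.append(row.substitute(rid=rid, text=data["text"], tests=tests))
    return "\n".join(lines)

print(generate_vcrm(model))
```

Because the matrix is generated, it never drifts from the model, and a requirement with no verifying test surfaces automatically rather than through manual review.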

While most products and programs will likely require some formal documents, System Engineers are encouraged to work with Customers and regulatory agencies to identify the minimum set sufficient to meet their obligations. Most, if not all, of the source information resides in engineering repositories that can, and should, be used for inspections and formal reviews where possible.

Build Model Quality In

As stated previously, model quality can suffer because of the number and diversity of contributors. Without proper oversight, continuous change by many hands erodes quality. The model owner is responsible for working with the teams to define quality practices, including model standards and model testing, and for helping ensure they are followed.

The quality practices discussed below facilitate early learning cycles. As SAFe notes, “You can’t scale crappy code,” and the same is true for system designs represented in models. Quality practices allow engineers to confidently and frequently make model changes and contribute to the solution intent.

Model Standards

Model standards help control quality and guide teams on how best to model. They can include:

  • What information should be captured
  • Modeling notations (e.g., SysML) and parts of those notations (e.g., Use Case) to use or exclude
  • Where modeling information should be placed for solution and subsystem elements
  • Meta-information that should be stored with different types of model elements
  • Links within the model or with other cross-discipline models
  • Common types and common dimensions used across the system
  • Modeling tool properties and configuration (likely part of the engineering environment)
  • Collaboration practices with any underlying version control system(s) (if relevant)

If documents are being generated from the models, the document templates should be defined early, as they influence many of these decisions. Systems designers need to know where to store the model elements and any metadata or links required on the model elements that may be used for queries, document generation, or compliance.

As a best practice, create an exemplar, high-level, full-system model early on. Ideally this is a skeleton model that spans the solution and defines all subsystems, with one element elaborated to illustrate how to model the structure and behavior of a single subsystem. Run any document generation tests against this configuration early to ensure completeness, so teams do not have to rework the model (waste) to add missed information.

Create Testable and Executable Models

SAFe’s Test-First practices help teams build quality into their products early and facilitate the continuous, small changes we find in Agile software development. Test-first creates a rich suite of test cases that allow developers to more reliably make changes without causing an error in another part of the system.

Lean practices encourage testable, executable models where appropriate to reduce waste associated with downstream errors. Models should be testable against whatever assessment criteria exist for the domain or discipline. Mechanical models have tests for physical and environmental issues, electrical models for logic, software models for anomalies, and executable system models for system behavior. Most modeling tools provide the ability to check models or to create scripts that can iterate across the models and look for anomalies.
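A minimal sketch of such an anomaly-scanning script, assuming a hypothetical exported element list rather than any specific tool's API:

```python
# Hypothetical exported model elements; a real check would iterate over
# the modeling tool's API or repository instead of a list of dicts.
elements = [
    {"id": "SYS-BLOCK-A", "name": "Pump Controller", "owner": "team-red"},
    {"id": "SYS-BLOCK-B", "name": "", "owner": "team-blue"},        # anomaly
    {"id": "SYS-BLOCK-C", "name": "Heat Exchanger", "owner": None}, # anomaly
]

REQUIRED_FIELDS = ("name", "owner")

def find_anomalies(elements):
    """Flag elements missing required meta-information."""
    problems = []
    for el in elements:
        for field in REQUIRED_FIELDS:
            if not el.get(field):
                problems.append((el["id"], f"missing {field}"))
    return problems

for elem_id, issue in find_anomalies(elements):
    print(f"{elem_id}: {issue}")
```

Run on every change, a check like this keeps small anomalies from accumulating into a model nobody trusts.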

Testing Requirements Models. Textual requirements are used in almost every system and, in current practice, are typically reviewed manually. The community has long sought executable specifications, where requirements are specified in a form that can be executed for testing. Success, however, has been limited to specific domains where the problem is well defined. Agile practices advocate a variant of executable specifications called Acceptance Test-Driven Development (ATDD), and some exploratory work has emerged to support ATDD in systems development. While promising for requirements testing, ATDD has not yet been used at large scale. Where possible, make requirements and tests one and the same, and automate their execution.
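A sketch of the ATDD idea applied to systems: the hypothetical PumpModel stub below stands in for an executable model, and the illustrative requirement "the pump shall start within 2 seconds" is written directly as its acceptance test:

```python
# ATDD sketch. PumpModel is a hypothetical stand-in for an executable
# model; a real one would simulate startup behavior rather than return
# a fixed value.
class PumpModel:
    def start(self):
        self.startup_time_s = 1.4  # stubbed simulation result

def test_pump_starts_within_two_seconds():
    """The requirement and its test are one and the same artifact."""
    pump = PumpModel()
    pump.start()
    assert pump.startup_time_s <= 2.0, "REQ violated: startup too slow"

test_pump_starts_within_two_seconds()
print("acceptance test passed")
```

When the requirement changes (say, to 1 second), the test changes with it, and the executable model reports the violation immediately rather than at integration time.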

Testing Analysis and Design Models. Designs represented in models can be tested both statically and dynamically. Modeling tools typically have static model analyzers or “checkers” that examine the model looking for anomalies. Model owners may add their own rules—model organization, modeling conventions and standards, required meta-information (e.g., SysML tags, stereotypes), etc. Where analyzers do not exist, scripts can iterate over the models looking for violations in the static model.
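A sketch of such custom rules, with hypothetical elements and project conventions (the naming pattern and the safety-tag rule are illustrative assumptions, not standard SysML checks):

```python
import re

# Hypothetical project conventions for a static model checker. Real
# tools expose rule plug-in APIs; this mimics the idea on plain dicts.
NAME_RULE = re.compile(r"^[A-Z][A-Za-z0-9 ]+$")

def check_element(el):
    """Apply project-specific conventions to one model element."""
    issues = []
    if not NAME_RULE.match(el.get("name", "")):
        issues.append("name violates naming convention")
    if "safety" in el.get("tags", []) and "hazard_ref" not in el:
        issues.append("safety-tagged element lacks hazard reference")
    return issues

element = {"name": "pump controller", "tags": ["safety"]}  # lowercase name
print(check_element(element))
```

Rules like these encode the model standards discussed earlier, so standards are enforced by the checker rather than by review alone.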

Models can also be dynamically tested. The models from engineering disciplines have their own solutions for assessing quality and should be leveraged as part of testing practice.

Testing Traceability. Models must conform to the defined linking structure; otherwise queries, document generation, and compliance reporting will produce incorrect or incomplete results.
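One way to test conformance, sketched in Python with an assumed rule table mapping element types to the link kinds allowed between them (the prefixes and link names are hypothetical):

```python
# Hypothetical linking rules: which element types may link to which,
# and with what link kind. Prefixes and kinds are illustrative.
ALLOWED = {
    ("REQ", "TEST"): "verified_by",
    ("REQ", "SYS"): "satisfied_by",
    ("SYS", "SW"): "allocated_to",
}

def element_type(name):
    return name.split("-")[0]

def check_links(links):
    """Reject links that do not follow the defined linking structure."""
    bad = []
    for src, kind, dst in links:
        expected = ALLOWED.get((element_type(src), element_type(dst)))
        if expected != kind:
            bad.append((src, kind, dst))
    return bad

links = [
    ("REQ-1", "verified_by", "TEST-7"),         # conforms
    ("TEST-7", "satisfied_by", "SYS-BLOCK-A"),  # wrong direction and kind
]
print(check_links(links))
```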

Document Generation. Though possibly redundant with the traceability checks above, document generation may warrant scripts that verify the model is structured properly and that all data exists to support the document templates. With large models, it is often easier to debug a script than a document template.
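A minimal precheck of that kind, assuming hypothetical template fields and a hand-written model export:

```python
# Hypothetical precheck: verify every field the document templates need
# is present in the model export before attempting generation.
TEMPLATE_FIELDS = {"id", "text", "rationale"}

model = {
    "REQ-1": {"id": "REQ-1", "text": "The pump shall start within 2 seconds.",
              "rationale": "Startup delay affects safety interlocks."},
    "REQ-2": {"id": "REQ-2", "text": "The pump shall stop on overheat."},
}

def missing_fields(model):
    """Report (element, field) pairs the templates need but lack."""
    return [(rid, f) for rid, data in model.items()
            for f in sorted(TEMPLATE_FIELDS - data.keys())]

print(missing_fields(model))  # a plain list is easier to debug than a failed template
```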


Last update: 31 March 2016