Verification and Validation of Simulation Models

Definitions: Verification is the process of determining that a model implementation and its associated data accurately represent the developer's conceptual description and specifications.

Validation is the process of determining the degree to which a simulation model and its associated data are an accurate representation of the real world from the perspective of the intended uses of the model [1].

Keywords: accreditation, analysis, data, historical, SME, statistics, subject matter expert, validation, verification

MITRE SE Roles and Expectations: The MITRE systems engineer (SE) is expected to have sound knowledge of both the system being modeled and the software process for developing the model. This knowledge enables the SE to provide effective technical guidance in the design and execution of plans to verify and/or validate a model, and to provide specialized technical expertise in collecting and analyzing the varying types of data required to do so. A MITRE SE is expected to work directly with the developers of the system and the simulation model to provide technical insight into model verification and validation. In most cases, the MITRE SE will be responsible for assisting the government sponsoring organization in the formal accreditation of the model.


Modeling and simulation (M&S) can be an important element in the acquisition of systems within government organizations. M&S is used during development to explore the design trade space and inform design decisions, and, in conjunction with testing and analysis, to gain confidence that the design implementation is performing as expected, or to assist troubleshooting if it is not. M&S allows decision makers and stakeholders to quantify certain aspects of performance during the system development phase and to provide supplementary data during the testing phase of system acquisition. More importantly, M&S may play a key role in the qualification ("sell-off") of a system as a means to reduce the cost of a verification test program. In that role, a simulation model that has undergone a formal verification, validation, and accreditation (VV&A) process is not merely desirable, but essential.

Consider the following definitions for the phases of the simulation model VV&A process [1]:

  • Verification: "The process of determining that a model implementation and its associated data accurately represent the developer’s conceptual description and specifications."
  • Validation: "The process of determining the degree to which a [simulation] model and its associated data are an accurate representation of the real world from the perspective of the intended uses of the model."
  • Accreditation: "The official certification that a model, simulation, or federation of models and simulations and its associated data are acceptable for use for a specific purpose."
  • Simulation Conceptual Model: "The developer's description of what the model or simulation will represent, the assumptions limiting those representations, and other capabilities needed to satisfy the user's requirements."

Verification answers the question "Have we built the model right?" whereas validation answers the question "Have we built the right model?" [2]. In other words, the verification phase of VV&A focuses on comparing the elements of a simulation model of the system against the specified requirements and capabilities of the model.

Verification is an iterative process aimed at determining whether the product of each step in the development of the simulation model fulfills all the requirements levied on it by the previous step and is internally complete, consistent, and correct enough to support the next phase [3].

The validation phase of VV&A focuses on the agreement between the observed behavior of elements of a system with the corresponding elements of a simulation model of the system and on determining whether the differences are acceptable given the intended use of the model. If a satisfactory agreement is not obtained, the model is adjusted to bring it in closer agreement with the observed behavior of the actual system (or errors in observation/experimentation or reference models/analyses are identified and rectified).

Typically, government sponsoring organizations that mandate the use of a formal VV&A process do not specify how each phase should be carried out. Rather, they provide broad guidance that often includes artifacts required from the process.

Independent Verification and Validation (IV&V) activities occur throughout most of the systems engineering development life-cycle phases and are actively connected to them, as depicted in Figure 1, rather than being limited to integration and testing phases.

Figure 1. The Relationship Between IV&V and Systems Engineering Processes

A variety of methods are used to validate simulation models, ranging from comparison to other models to the use of data generated by the actual system (i.e., predictive validation). The most commonly used methods are described in Table 1 [4].

Table 1. Common Simulation Model Validation Methods

  • Comparison to Other Models: Various results (e.g., outputs) of the simulation model being validated are compared to results of other (valid) models. For example, simple cases of a simulation model are compared to known results of analytic models, and the simulation model is compared to other simulation models that have themselves been validated.
  • Face Validity: Individuals knowledgeable about the system are asked whether the model and/or its behavior are reasonable. For example, is the logic in the conceptual model correct, and are the model's input-output relationships reasonable?
  • Historical Data Validation: If historical data exist (e.g., data collected on a system specifically for building and testing a model), part of the data is used to build the model and the remaining data are used to determine (test) whether the model behaves as the system does.
  • Parameter Variability (Sensitivity Analysis): The values of the input and internal parameters of a model are changed to determine the effect on the model's behavior or output. The same relations should occur in the model as in the real system. This technique can be used qualitatively (directions of outputs only) and quantitatively (both directions and precise magnitudes of outputs). Parameters that are sensitive (i.e., cause significant changes in the model's behavior or output) should be made sufficiently accurate prior to using the model.
  • Predictive Validation: The model is used to predict (forecast) the system's behavior, and then the system's behavior and the model's forecast are compared to determine whether they are the same. The system's data may come from an operational system or be obtained by conducting experiments on the system (e.g., field tests).

With the exception of face validity, all the methods detailed in Table 1 are data-driven approaches to model validation, with predictive validation being the most frequently used. Predictive validation generally requires a significant amount of effort to acquire and analyze data to support model validation.
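As a concrete instance of the "comparison to other models" method in Table 1, a simple discrete-event simulation of an M/M/1 queue can be checked against the known analytic result for mean waiting time in queue, Wq = λ/(μ(μ − λ)). The sketch below is a minimal illustration (not any particular simulation tool) based on the Lindley recursion W[n+1] = max(0, W[n] + S[n] − A[n+1]):

```python
import random

def simulate_mm1_wait(lam, mu, n_customers, seed=42):
    """Estimate mean waiting time in queue for an M/M/1 queue
    via the Lindley recursion: W[n+1] = max(0, W[n] + S[n] - A[n+1])."""
    rng = random.Random(seed)
    wait = 0.0
    total = 0.0
    for _ in range(n_customers):
        service = rng.expovariate(mu)        # service time of this customer
        interarrival = rng.expovariate(lam)  # time until the next arrival
        wait = max(0.0, wait + service - interarrival)
        total += wait
    return total / n_customers

lam, mu = 0.5, 1.0
simulated = simulate_mm1_wait(lam, mu, 200_000)
analytic = lam / (mu * (mu - lam))   # known queueing-theory result
print(f"simulated Wq = {simulated:.3f}, analytic Wq = {analytic:.3f}")
```

With these parameters the analytic mean wait is 1.0, and the simulated estimate should agree closely; a persistent discrepancy in such a simple case would indicate a verification or validation problem in the simulation logic.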

Best Practices and Lessons Learned

Develop and maintain a model VV&A plan. Develop a detailed model VV&A plan before the start of the acquisition program VV&A process. Distinct from the specification for the model itself, this plan should provide guidance for each phase of VV&A and clarify the difference between the verification and validation phases. The plan should also map model specification requirements to model elements, identify those model elements that require validation, and develop model validation requirements. The plan should describe the method(s) used for validation, including any supporting analysis techniques. If a data-driven approach is used for model validation, provide detail on either the pedigree of existing data or the plan to collect new data to support validation.

Prototype simulations and ad hoc models are often developed without a VV&A plan having been considered beforehand. Then, when the prototype becomes a production system, an attempt is made to perform verification and validation after the fact. This is extremely difficult when normal system development artifacts have not been created and there is no audit trail from concept to product.

Establish quantitative model performance requirements. Performance requirements for the domain model are often neglected during model development. Complex systems can have interactions that produce unexpected results in seemingly benign situations. Note that model performance requirements are distinct from model validation requirements.

Prototypes developed with small problem sets may not scale to large problems that production systems will deal with. More model detail does not necessarily generate a better answer and may make a simulation intractable. In discrete event simulation, the appropriate event queue implementation, random number generators, and sorting/searching algorithms can make a huge difference in performance.
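For instance, backing the future-event list with a binary heap gives O(log n) scheduling and retrieval, which typically scales far better than a naively sorted list as event counts grow. A minimal sketch of such an event loop, using Python's standard library heapq (class and method names here are illustrative, not from any particular tool):

```python
import heapq
import itertools

class EventQueue:
    """Minimal future-event list for a discrete event simulation,
    backed by a binary heap for O(log n) schedule and pop."""
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker: FIFO for equal times

    def schedule(self, time, action):
        heapq.heappush(self._heap, (time, next(self._counter), action))

    def run(self):
        clock = 0.0
        while self._heap:
            clock, _, action = heapq.heappop(self._heap)  # earliest event first
            action(clock)
        return clock

q = EventQueue()
log = []
q.schedule(5.0, lambda t: log.append(("end", t)))
q.schedule(1.0, lambda t: log.append(("start", t)))
final_clock = q.run()   # events fire in time order regardless of insertion order
```

The monotonically increasing counter in each heap entry keeps simultaneous events in insertion order and avoids comparing the (non-comparable) action callables.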

Plan for parallel (iterative) model development and VV&A. Carry out the model development and its VV&A process in parallel, with sufficient resources and schedule to allow for several iterations. Figure 2 depicts a notional relationship between model development and the VV&A process. The focus here is the gathering and analysis of validation data for a particular model element and the resulting decision to: (1) adjust the value of one or more parameters associated with the model element to obtain closer agreement with observed system behavior, (2) redesign the model element by factoring in insights obtained from the analysis of validation data, or (3) accredit the model with respect to this element.

Figure 2. Simulation Model Development and the VV&A Process

Although Figure 2 depicts the relationship between model development and VV&A in its entirety, it may be applied to individual, independent model elements. A model VV&A plan should identify those model elements that may be validated independently of others. For those model elements that interact, the plan should identify appropriate verification and validation approaches.


Know what your simulation tool is doing. If a simulation language or tool is used to build the simulation, "hidden" assumptions are built into the tool. Here are four common situations that are handled differently by different simulation tools:

  • Resource recapture: Suppose a part releases a machine (resource) and then immediately tries to recapture the machine for further processing. Should a more highly qualified part capture the machine instead? Some tools assign the machine to the next user without considering the original part a contender. Some tools defer choosing the next user until the releasing entity becomes a contender. Still others reassign the releasing part without considering other contenders.
  • Condition delayed entities: Consider two entities waiting because no units of a resource are available. The first entity requires two units of the resource, and the second entity requires one unit of the resource. When one unit of the resource becomes available, how is it assigned? Some tools assign the resource to the second entity. Other tools assign it to the first entity, which continues to wait for another unit of the resource.
  • Yielding control temporarily: Suppose that an active entity wants to yield control to another entity that can perform some processing, but then wants to become active and continue processing before the simulation clock advances. Some tools do not allow an entity to resume processing at all. Other tools allow an entity to compete with other entities that want to process. Still others give priority to the relinquishing process and allow it to resume.
  • Conditions involving the clock: Suppose an entity needs to wait for a compound condition involving the clock, e.g., "wait until the input buffer is empty or it is exactly 5:00 p.m." Generally, the programmer will have to "trick" the system to combine timed events with other conditions. An event at 5:00 p.m. could check to see if the buffer was empty, and if so, assume that the buffer-empty event occurred earlier.
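The 5:00 p.m. "trick" described in the last bullet can be sketched with a simple heap-based event loop: schedule an event at the deadline whose handler checks the other half of the compound condition. All names below are illustrative, and the clock value 17.0 stands in for 5:00 p.m.:

```python
import heapq

events = []                     # future-event list: (time, handler)
buffer = ["job-1", "job-2"]     # the input buffer being watched
fired = []                      # record of condition-related events

def check_deadline(now):
    # At the deadline, test the other half of the compound condition:
    # has the buffer already emptied before 5:00 p.m.?
    if not buffer:
        fired.append(("buffer-empty-or-deadline", now))

def drain_buffer(now):
    # An ordinary simulation event that happens to empty the buffer.
    buffer.clear()
    fired.append(("buffer-drained", now))

heapq.heappush(events, (17.0, check_deadline))  # the "5:00 p.m." event
heapq.heappush(events, (16.5, drain_buffer))

while events:
    now, handler = heapq.heappop(events)
    handler(now)
```

Because the compound condition is evaluated only at the scheduled check, a tool-specific subtlety remains: if the buffer empties and then refills before the deadline, this sketch would miss it, which is exactly the kind of hidden assumption the bullet above warns about.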

Test for discrete math issues. Computer simulation models use discrete representations for numbers. This can cause strange behaviors. When testing models, always include tests at the extremes. Examples of errors found during testing include overflow for integers and subtraction of floating point numbers with insufficient mantissa representation. Conversion from number systems could cause rounding errors that are significant in the simulation.
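The failure modes named above can be reproduced directly with standard double-precision arithmetic; the fixed-width integer case is emulated here because Python's own integers are unbounded:

```python
# 1. Insufficient mantissa: a double carries 53 significand bits, so adding
#    1 to 1e16 is lost entirely; the difference below is 0.0, not 1.0.
lost_update = (1e16 + 1.0) - 1e16
print(lost_update)   # 0.0

# 2. Number-system conversion rounding: 0.1 has no exact binary
#    representation, so accumulated error breaks naive equality tests.
accumulated = sum([0.1] * 10)
print(accumulated == 1.0)              # False
print(abs(accumulated - 1.0) < 1e-9)   # True: compare with a tolerance

# 3. Fixed-width integer overflow, emulating the 32-bit counters many
#    simulation tools use internally: the maximum value wraps to the minimum.
INT32_MAX = 2**31 - 1
wrapped = (INT32_MAX + 1) % 2**32
if wrapped > INT32_MAX:
    wrapped -= 2**32
print(wrapped)   # -2147483648
```

Tests at the extremes of a model's input ranges are the cheapest way to surface all three behaviors before they corrupt simulation results.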

Subsystem verification and validation do not guarantee system credibility. Each submodel may produce valid results and the integration of models can be verified to be correct, but the simulation can still produce invalid results. Most often this occurs when the subsystem conceptual design includes factors that are not considered in the system conceptual design, or vice versa. For example, in a recently reviewed simulation system with multiple subcomponents, one of the subcomponents simulated maintenance on vehicles, including 2½ ton trucks. The submodel had no need to distinguish between truck types; however, vehicle type distinction was important to another module. In the system, tanker trucks used for hauling water or petroleum were being returned from the maintenance submodel and became ambulances—alphabetically the first type of 2½ ton truck. Interactions between subsystem/submodel components should be addressed.


Establish quantitative model validation requirements. Specify the degree to which each element (component, parameter) of the model is to be validated. For model elements based on a stochastic process, validation is often based on statistical tests of agreement between the behavior of the model and that of the actual system. If you use a hypothesis test to assess agreement, you need to specify both allowable Type I and Type II risks (errors) as part of an unambiguous and complete validation requirement. If, instead, you use a confidence interval to assess agreement, you need to specify the maximum allowable interval width (i.e., precision) and a confidence level.

In certain instances, whether you use a hypothesis test or a confidence interval may be a matter of preference; however, in some instances, you will need to use a hypothesis test due to the nature of the model element being validated. Regardless, the development of a validation requirement is often an involved task requiring the use of analytic or computational methods to determine allowable levels of validation risk and/or precision. Allocate sufficient time, resources, and expertise to this task.
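As a hedged sketch of the hypothesis-test route, the comparison below uses a large-sample two-sample z-test on the difference of means (an approximation to the t-tests commonly used in practice); the data, sample sizes, and the alpha value are illustrative assumptions, not prescribed requirements:

```python
import math
import random
from statistics import NormalDist, mean, stdev

def two_sample_z_test(model_out, system_out):
    """Two-sided p-value for the difference of means between model output
    and observed system data (large-sample normal approximation)."""
    n1, n2 = len(model_out), len(system_out)
    se = math.sqrt(stdev(model_out)**2 / n1 + stdev(system_out)**2 / n2)
    z = (mean(model_out) - mean(system_out)) / se
    return 2.0 * (1.0 - NormalDist().cdf(abs(z)))

rng = random.Random(7)
system = [rng.gauss(10.0, 2.0) for _ in range(500)]  # observed system data
good_model = list(system)                            # identical behavior
biased_model = [x + 5.0 for x in system]             # mean shifted by 5 units

alpha = 0.05  # allowable Type I risk from the validation requirement
p_identical = two_sample_z_test(good_model, system)
p_biased = two_sample_z_test(biased_model, system)
print(p_identical, p_biased)
```

A p-value below alpha rejects agreement for the element under test. Note that controlling the Type II risk as well requires planning the sample size in advance, which is where the allowable-risk portion of the validation requirement drives the data collection effort.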

Develop model validation "trade-off" curves. If you use predictive validation, you will need to determine the amount of data required to achieve a quantitative model validation requirement (see above). Determining the amount of data (N) required to achieve, say, a maximum allowable confidence interval width at a specified confidence level for a model element may require extensive analytical or computational methods. Related to this is the "inverse" problem: determining the maximum (or expected) confidence interval width given a particular value of N. From these results, a set of curves may be constructed that shows the trade-off between the amount of validation data required (which generally drives the cost of the validation effort) and the validation precision achieved. Address this problem as early as possible during the validation phase of the VV&A process.
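For the simplest case, a normal-theory confidence interval for a mean with known (or well-estimated) standard deviation, the required N has a closed form, and a trade-off curve is a few lines of code. The sigma value and width targets below are illustrative assumptions:

```python
import math
from statistics import NormalDist

def samples_for_ci_width(sigma, max_width, confidence=0.95):
    """Number of observations N so that a normal-theory confidence interval
    for the mean has full width at most max_width:
    width = 2 * z * sigma / sqrt(N)  =>  N = (2 * z * sigma / max_width)**2."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2.0)  # two-sided critical value
    return math.ceil((2.0 * z * sigma / max_width) ** 2)

# Trade-off curve: required N versus allowable interval width,
# for an assumed process standard deviation of 2.0.
for width in (1.0, 0.5, 0.25):
    print(width, samples_for_ci_width(2.0, width))
```

Halving the allowable width quadruples the required N, which is exactly the cost-versus-precision trade the curves are meant to expose; less tractable model elements require simulation-based or bootstrap versions of the same calculation.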

Not every model element requires validation. Model elements associated with model functional (versus performance) requirements usually do not require validation. These elements are dealt with in the verification phase of the VV&A process. Trivial examples of this are model elements that allow the user to select various model options (i.e., "switches" or "knobs"). Non-trivial examples are elements that have been identified as not being relevant or critical to the intended use of the model; however, a model element that has not been deemed critical may, in fact, be fully functional when the simulation model is deployed. In this case, the model element (e.g., an option to add "noise" to the model output) may still be exercised, but the model accreditation documentation should note that this particular element has been "verified, but not validated."

Allow for simplifications in the model. Almost always, some observed behaviors of the actual system will be difficult to model and/or validate given the scope and resources of the model development and validation efforts. In such cases, using simplifications (or approximations) in the model may provide an acceptable way forward. For example, if a model element requires a stochastic data-generating mechanism, you may use a probability density function with a limited number of parameters (e.g., a Gaussian) in place of what appears to be, based on analysis of data from the actual system, a more complex data-generating mechanism. In doing this, use a conservative approach. That is, in this example, employing a simplified data-generating mechanism in the model should not result in overly optimistic behavior with respect to the actual system.
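One way to make the Gaussian simplification conservative is to inflate the fitted standard deviation so the surrogate never understates the variability seen in the system data. The sketch below illustrates the idea; the inflation factor and the synthetic "observed" data are assumptions for illustration, not prescribed values:

```python
import random
from statistics import NormalDist, mean, stdev

def conservative_gaussian(samples, inflation=1.25):
    """Fit a Gaussian surrogate to observed data, inflating the standard
    deviation so the simplified model does not understate variability
    (and so does not behave optimistically relative to the real system)."""
    return NormalDist(mu=mean(samples), sigma=inflation * stdev(samples))

rng = random.Random(11)
# Stand-in for observed system data with a heavier right tail than a Gaussian.
observed = [rng.expovariate(1.0) + rng.gauss(5.0, 0.5) for _ in range(1000)]

sample_mu, sample_sigma = mean(observed), stdev(observed)
surrogate = conservative_gaussian(observed)
```

How much to inflate (and whether inflating spread is conservative at all) depends on which direction of error is "optimistic" for the intended use of the model, so the choice belongs in the validation plan, not in code.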

Consider the partial model validation of selected model elements. When the validation of a particular model element becomes problematic, it may be acceptable to validate the model element over a subset of the element's defined range. This partial validation of a model element may be a viable option when either insufficient data is available to enable validation, or the actual system behavior is difficult to model even if reasonable simplifications are made. However, the resulting model will be valid with respect to this element only within this limited range. Note this fact in the accreditation documentation.

Use multiple approaches to model validation. When data-driven model validation is not possible or practical, you can use face validity; however, even when data-driven validation can be carried out, you can use face validity (i.e., expert review) as a first step in the validation phase. If a similar but already validated model is available, performing a comparison of the model being developed to this existing model may provide an initial (or preliminary) model validation. Following this initial validation, you can use a more comprehensive approach, such as predictive validation.

Establish a model validation working group. A regular and structured working group involving the sponsoring government organization, the system developer, and the model developer will result in a superior simulation model. The function of this group should be the generation and review of artifacts from the model validation phase. When data-driven validation is used, the majority of the effort should focus on the information products produced by the statistical analysis of validation data. This group may also provide recommendations regarding the need to collect additional validation data, should an increase in the quality of the validation results be necessary.

Invest in analytic capabilities and resources. Plan for the availability of subject matter experts, sufficient computing and data storage capability, and analysis software early in the VV&A process. For many VV&A efforts, at least two subject matter experts will be required: one who has knowledge of the system being modeled and another who has broad knowledge in areas of data reduction, statistical analysis, and, possibly, a variety of operations research techniques. If analysis software is needed, consider using open source packages—many provide all the data reduction, statistical analysis, and graphical capability needed to support validation efforts.


The successful verification and validation of a simulation model requires the government sponsoring organization to address VV&A early in the life of an acquisition program. Key activities include:

  • The development of a model VV&A plan with quantitative model validation requirements (where appropriate).
  • Detailed planning for the collection of validation data (if data-driven validation is needed).
  • The assembly of a working group that includes both domain experts and analysts.

Model verification and validation, as well as the development of the model itself, should not be carried out sequentially, but in a parallel and iterative manner.

References and Resources

  1. U.S. Department of Defense, DoD Modeling and Simulation (M&S) Verification, Validation, and Accreditation (VV&A), December 9, 2009, DoD Instruction 5000.61.
  2. Cook, D. A., and J. M. Skinner, May 2005, How to Perform Credible Verification, Validation, and Accreditation for Modeling and Simulation, CrossTalk: The Journal of Defense Software Engineering.
  3. Lewis, R. O., 1992, Independent Verification and Validation, Wiley & Sons.
  4. Law, A. M., 2008, How to Build Valid and Credible Simulation Models, paper presented at the IEEE Winter Simulation Conference.

Additional References and Resources

Law, A. M., 2014, Simulation Modeling and Analysis, 5th Ed., McGraw Hill.

