{{Orphan|date=May 2012}}


'''Verification and Validation of Computer Simulation Models''' is conducted during the development of a simulation model with the ultimate goal of producing an accurate and credible model.<ref name ="Banks">Banks, Jerry; Carson, John S.; Nelson, Barry L.; Nicol, David M. ''Discrete-Event System Simulation'' Fifth Edition, Upper Saddle River, Pearson Education, Inc. 2010 ISBN 0136062121</ref><ref name = "Schlesinger">Schlesinger, S., et al. 1979. Terminology for model credibility. ''Simulation'' 32 (3): 103–104.</ref>  "Simulation models are increasingly being used to solve problems and to aid in decision-making. The developers and users of these models, the decision makers using information obtained from the results of these models, and the individuals affected by decisions based on such models are all rightly concerned with whether a model and its results are “correct”".<ref name ="Sargent">Sargent, Robert G. Verification and Validation of Simulation Models. Proceedings of the 2011 Winter Simulation Conference. http://www.informs-sim.org/wsc11papers/016.pdf</ref>  This concern is addressed through verification and validation of the simulation model.


Simulation models are approximate imitations of real-world systems; they never exactly imitate the real-world system.  Because of that, a model should be verified and validated to the degree needed for the model's intended purpose or application.<ref name ="Sargent" />
 
The verification and validation of a simulation model starts after the functional specifications have been documented and initial model development has been completed.<ref name ="Carson" >Carson, John, Model Verification and Validation. Proceedings of the 2002 Winter Simulation Conference. http://informs-sim.org/wsc02papers/008.pdf</ref>  Verification and validation is an iterative process that takes place throughout the development of a model.<ref name ="Banks" /><ref name ="Carson" />
 
== Verification ==
In the context of computer simulation, '''verification''' of a model is the process of confirming that it is correctly implemented with respect to the conceptual model (it matches specifications and assumptions deemed acceptable for the given purpose of application).<ref name ="Banks" /><ref name ="Carson" />  During verification the model is tested to find and fix errors in the implementation of the model.<ref name ="Carson" />  Various processes and techniques are used to assure that the model matches its specifications and assumptions with respect to the model concept.  The objective of model verification is to ensure that the implementation of the model is correct.
 
There are many techniques that can be used to verify a model.  These include, but are not limited to, having the model checked by an expert, making logic flow diagrams that include each logically possible action, examining the model output for reasonableness under a variety of settings of the input parameters, and using an interactive debugger.<ref name ="Banks" />  Many software engineering techniques used for [[software verification]] are applicable to simulation model verification.<ref name ="Banks" />
 
== Validation ==
Validation checks the accuracy of the model's representation of the real system.  Model validation is defined to mean “substantiation that a computerized model within its domain of applicability possesses a satisfactory range of accuracy consistent with the intended application of the model”.<ref name ="Sargent" />  A model should be built for a specific purpose or set of objectives and its validity determined for that purpose.<ref name ="Sargent" /> 
 
There are many approaches that can be used to validate a computer model.  They range from subjective reviews to objective statistical tests.  One commonly used approach is to have the model builders determine the validity of the model through a series of tests.<ref name ="Sargent" /> 
 
Naylor and Finger [1967] formulated a three-step approach to model validation that has been widely followed:<ref name ="Banks" />
 
Step 1. Build a model that has high face validity.
 
Step 2. Validate model assumptions.
 
Step 3. Compare the model input-output transformations to corresponding input-output transformations for the real system.<ref>Naylor, T. H., and J. M. Finger [1967]. “Verification of Computer Simulation Models,” ''Management Science'', Vol. 14, No. 2, pp. B92–B101, http://mansci.journal.informs.org/content/14/2/B-92; cited in Banks, Jerry; Carson, John S.; Nelson, Barry L.; Nicol, David M. ''Discrete-Event System Simulation'' Fifth Edition, Upper Saddle River, Pearson Education, Inc. 2010 p. 396 ISBN 0136062121</ref>
 
=== Face Validity ===
A model that has '''face validity''' appears to be a reasonable imitation of a real-world system to people who are knowledgeable about the real-world system.<ref name ="Carson" />  Face validity is tested by having users and people familiar with the system examine model output for reasonableness and, in the process, identify deficiencies.<ref name ="Banks" />  An added advantage of having the users involved in validation is that the model's credibility to the users, and the users' confidence in the model, increases.<ref name ="Banks" /><ref name ="Carson" />  Sensitivity to model inputs can also be used to judge face validity.<ref name ="Banks" />  For example, if a simulation of a fast food restaurant drive-through were run twice with customer arrival rates of 20 per hour and 40 per hour, then model outputs such as average wait time or maximum number of customers waiting would be expected to increase with the arrival rate.
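
A minimal sketch of such a sensitivity check is given below, assuming a hypothetical single-server first-in-first-out drive-through with exponential interarrival and service times; the rates, run length and seed are illustrative assumptions, not taken from the sources.

<syntaxhighlight lang="python">
import random

def average_wait(arrivals_per_hour, service_rate_per_hour=50, n_customers=5000, seed=42):
    """Simulate a FIFO single-server queue and return the mean wait in minutes."""
    rng = random.Random(seed)
    clock = 0.0           # arrival clock, in minutes
    server_free_at = 0.0  # time at which the server next becomes idle
    total_wait = 0.0
    for _ in range(n_customers):
        clock += rng.expovariate(arrivals_per_hour / 60.0)   # next arrival
        start = max(clock, server_free_at)                    # customer waits if server is busy
        total_wait += start - clock
        server_free_at = start + rng.expovariate(service_rate_per_hour / 60.0)
    return total_wait / n_customers

low = average_wait(arrivals_per_hour=20)
high = average_wait(arrivals_per_hour=40)
print(f"mean wait at 20/hr: {low:.2f} min, at 40/hr: {high:.2f} min")
# Face-validity expectation: waiting time should rise with the arrival rate.
assert high > low
</syntaxhighlight>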
 
=== Validation of Model Assumptions ===
Assumptions made about a model generally fall into two categories: structural assumptions about how the system works, and data assumptions.
 
==== Structural Assumptions ====
Assumptions made about how the system operates and how it is physically arranged are structural assumptions.  For example, how many servers are there in a fast food drive-through lane, and if there is more than one, how are they used?  Do the servers work in parallel, where a customer completes a transaction by visiting a single server, or does one server take orders and handle payment while another prepares and serves the order?  Many structural problems in the model come from poor or incorrect assumptions.<ref name="Carson" />  If possible, the workings of the actual system should be closely observed to understand how it operates.<ref name="Carson" />  The system's structure and operation should also be verified with users of the actual system.<ref name="Banks" />
 
==== Data Assumptions ====
There must be a sufficient amount of appropriate data available to build a conceptual model and validate a model.  Lack of appropriate data is often the reason attempts to validate a model fail.<ref name ="Sargent" />  Data should be verified to come from a reliable source.  A typical error is assuming an inappropriate statistical distribution for the data.<ref name="Banks" />  The assumed statistical model should be tested using goodness of fit tests and other techniques.<ref name ="Banks" /><ref name ="Sargent" /> Examples of goodness of fit tests are the [[Kolmogorov–Smirnov test]] and the [[chi-square test]].  Any outliers in the data should be checked.<ref name ="Sargent" />
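
A minimal sketch of such a data-assumption check follows, assuming the observed quantity is service time and the assumed distribution is exponential; the data values are hypothetical.

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

# Hypothetical observed service times (minutes); in practice these would be
# measured at the real system.
service_times = np.array([0.9, 1.4, 0.3, 2.1, 1.1, 0.7, 3.0, 0.5, 1.8, 1.2,
                          0.4, 2.6, 0.8, 1.6, 0.6, 1.0, 2.2, 0.2, 1.3, 0.9])

# Kolmogorov-Smirnov goodness-of-fit test against the assumed exponential model.
# Estimating the scale from the same data makes the standard p-value only
# approximate (a Lilliefors-style correction would be more rigorous).
scale = service_times.mean()          # exponential scale = 1/rate
d_stat, p_value = stats.kstest(service_times, "expon", args=(0, scale))
print(f"KS statistic = {d_stat:.3f}, p-value = {p_value:.3f}")
if p_value < 0.05:
    print("Reject the assumed exponential distribution at the 5% level.")

# Simple outlier screen: flag points more than 3 standard deviations from the mean.
z = np.abs(service_times - service_times.mean()) / service_times.std(ddof=1)
print("possible outliers:", service_times[z > 3])
</syntaxhighlight>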
 
=== Validating Input-Output Transformations ===
The model is viewed as an input-output transformation for these tests.  The validation test consists of comparing outputs from the system under consideration to model outputs for the same set of input conditions.  Data recorded while observing the system must be available in order to perform this test.<ref name ="Sargent" />  The model output that is of primary interest should be used as the measure of performance.<ref name ="Banks" />  For example, if the system under consideration is a fast food drive-through where the input to the model is customer arrival time and the output measure of performance is average customer time in line, then the actual arrival times and the time spent in line by customers at the drive-through would be recorded.  The model would be run with the actual arrival times, and the model's average time in line would be compared to the actual average time spent in line using one or more tests.
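
A minimal sketch of this kind of input-output comparison is shown below, assuming a single-server model driven by the recorded arrival times; the recorded data and the service-time distribution are hypothetical.

<syntaxhighlight lang="python">
import random

def model_time_in_line(arrival_times_min, mean_service_min=1.5, seed=1):
    """Single-server FIFO model driven by the actual recorded arrival times."""
    rng = random.Random(seed)
    server_free_at = 0.0
    waits = []
    for arrival in arrival_times_min:
        start = max(arrival, server_free_at)
        waits.append(start - arrival)
        server_free_at = start + rng.expovariate(1.0 / mean_service_min)
    return sum(waits) / len(waits)

# Hypothetical recorded data: arrival times (minutes since opening) and each
# customer's observed time in line at the real drive-through.
actual_arrivals = [0.5, 2.1, 2.4, 5.0, 6.3, 8.8, 9.1, 12.7]
actual_waits    = [0.0, 0.4, 1.9, 0.0, 0.6, 0.0, 1.2, 0.0]

model_avg  = model_time_in_line(actual_arrivals)
system_avg = sum(actual_waits) / len(actual_waits)
print(f"model average wait: {model_avg:.2f} min, observed average wait: {system_avg:.2f} min")
</syntaxhighlight>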
 
==== Hypothesis Testing ====
[[Statistical hypothesis testing]] using the [[Student's t-test|t-test]] can be used as a basis to accept the model as valid or reject it as invalid. 
 
The hypothesis to be tested is
:H<sub>0</sub> the model measure of performance = the system measure of performance
versus
:H<sub>1</sub> the model measure of performance ≠ the system measure of performance.
 
The test is conducted for a given sample size and level of significance, α.  To perform the test, a number ''n'' of statistically independent runs of the model are conducted and an average or expected value, E(Y), for the variable of interest is produced.  Then the test statistic ''t''<sub>0</sub> is computed from the given α, ''n'', E(Y) and the observed value for the system, μ<sub>0</sub>:
 
: <math>t_0 = \frac{E(Y)-\mu_0}{S/\sqrt{n}},</math> where ''S'' is the standard deviation over the ''n'' model runs, and the critical value for α and ''n''−1 degrees of freedom,
 
: <math>t_{\alpha/2,n-1},</math> is calculated.
 
If
: <math> \left\vert t_0 \right\vert > t_{\alpha/2,n-1},</math>
then H<sub>0</sub> is rejected and the model needs adjustment.
 
There are two types of error that can occur using hypothesis testing: rejecting a valid model, called a type I error or "model builder's risk", and accepting an invalid model, called a type II error, β, or "model user's risk".<ref name ="Sargent" />  The level of significance, α, is equal to the probability of a type I error.<ref name ="Sargent" />  If α is small then rejecting the null hypothesis is a strong conclusion.<ref name ="Banks" />  For example, if α = 0.05 and the null hypothesis is rejected, there is only a 0.05 probability of rejecting a model that is valid.  Decreasing the probability of a type II error is very important.<ref name ="Banks" /><ref name ="Sargent" />  The probability of correctly detecting an invalid model is 1 − β.  The probability of a type II error depends on the sample size and the actual difference between the sample value and the observed value.  Increasing the sample size decreases the risk of a type II error.
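
A minimal sketch of the two-sided t-test described above is given below; the run results, system value and α are illustrative.

<syntaxhighlight lang="python">
import math
from scipy import stats

model_runs = [4.2, 3.9, 4.5, 4.1, 4.8, 4.0, 4.3, 3.7, 4.6, 4.4]  # e.g. average wait per run (min)
mu_0 = 4.5        # observed system measure of performance
alpha = 0.05
n = len(model_runs)

mean_y = sum(model_runs) / n
s = math.sqrt(sum((y - mean_y) ** 2 for y in model_runs) / (n - 1))
t_0 = (mean_y - mu_0) / (s / math.sqrt(n))
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)   # critical value for alpha/2 and n-1 d.o.f.

print(f"t0 = {t_0:.3f}, critical value = {t_crit:.3f}")
if abs(t_0) > t_crit:
    print("Reject H0: the model needs adjustment.")
else:
    print("Fail to reject H0: no evidence of an invalid model at this level of significance.")
</syntaxhighlight>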
 
===== Model Accuracy as a Range =====
A statistical technique where the amount of model accuracy is specified as a range has recently been developed.  The technique uses hypothesis testing to accept a model if the difference between a model's variable of interest and a system's variable of interest is within a specified range of accuracy.<ref name="Sargent2">Sargent, R. G. 2010. “A New Statistical Procedure for Validation of Simulation and Stochastic Models.” Technical Report SYR-EECS-2010-06, Department of Electrical Engineering and Computer Science, Syracuse University, Syracuse, New York.</ref>  A requirement is that both the system data and model data be approximately [[Normal distribution|Normally]] [[Independent and identically distributed random variables|Independent and Identically Distributed (NIID)]].  The [[Student's t-test|t-test]] statistic is used in this technique.  If the mean of the model is μ<sub>m</sub> and the mean of the system is μ<sub>s</sub>, then the difference between the model and the system is D = μ<sub>m</sub> − μ<sub>s</sub>.  The hypothesis to be tested is whether D is within the acceptable range of accuracy. Let L = the lower limit for accuracy and U = the upper limit for accuracy.  Then
 
:H<sub>0</sub> L ≤ D ≤ U
versus
:H<sub>1</sub> D < L or D > U
 
is to be tested.
 
The operating characteristic (OC) curve gives the probability that the null hypothesis is accepted when it is true.  The OC curve characterizes the probabilities of both type I and type II errors.  Risk curves for model builder's risk and model user's risk can be developed from the OC curves.  By comparing curves for a fixed sample size, tradeoffs between model builder's risk and model user's risk can be seen easily in the risk curves.<ref name="Sargent2" />  If the model builder's risk, model user's risk, and the upper and lower limits for the range of accuracy are all specified, then the sample size needed can be calculated.<ref name="Sargent2" />
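
The containment idea behind this range-based test can be sketched as follows.  This is only an illustration built on a standard (Welch) confidence interval for D, not the exact procedure of the cited report, and all sample values and limits are assumptions.

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

model_data  = np.array([4.2, 3.9, 4.5, 4.1, 4.8, 4.0, 4.3, 3.7])  # model outputs
system_data = np.array([4.6, 4.4, 4.9, 4.2, 4.7, 4.5])            # system observations
L, U = -0.5, 0.5          # acceptable range for D = mu_m - mu_s (minutes)
alpha = 0.05

d_hat = model_data.mean() - system_data.mean()
vm, vs = model_data.var(ddof=1), system_data.var(ddof=1)
nm, ns = len(model_data), len(system_data)
se = np.sqrt(vm / nm + vs / ns)
# Welch-Satterthwaite degrees of freedom for unequal variances
df = se**4 / ((vm / nm)**2 / (nm - 1) + (vs / ns)**2 / (ns - 1))
t_crit = stats.t.ppf(1 - alpha / 2, df)
lo, hi = d_hat - t_crit * se, d_hat + t_crit * se

print(f"estimated D = {d_hat:.2f}, confidence interval = [{lo:.2f}, {hi:.2f}]")
# Accept only if the whole interval for D lies inside the accuracy range [L, U].
print("accept model" if (L <= lo and hi <= U) else "do not accept model")
</syntaxhighlight>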
 
==== Confidence Intervals ====
Confidence intervals can be used to evaluate if a model is "close enough"<ref name ="Banks" /> to a system for some variable of interest.  The difference between the model's true mean, μ, and the known system value, μ<sub>0</sub>, is checked to see if it is less than a value small enough that the model is valid with respect to that variable of interest.  That value is denoted by the symbol ε.  To perform the test, a number ''n'' of statistically independent runs of the model are conducted and a mean or expected value, E(Y) or μ, for the simulation output variable of interest Y, with a standard deviation ''S'', is produced.  A confidence level, 100(1−α), is selected.  An interval, [a,b], is constructed by
 
: <math>a = E(Y) - t_{\alpha/2,n-1}\frac{S}{\sqrt{n}} \quad \text{and} \quad b = E(Y) + t_{\alpha/2,n-1}\frac{S}{\sqrt{n}},</math>
where
: <math>t_{\alpha/2,n-1}</math>
is the critical value from the t-distribution for the given level of significance and n-1 degrees of freedom.
: If |a-μ<sub>0</sub>| > ε and |b-μ<sub>0</sub>| > ε then the model needs to be calibrated since in both cases the difference is larger than acceptable.
: If |a-μ<sub>0</sub>| < ε and |b-μ<sub>0</sub>| < ε then the model is acceptable as in both cases the error is close enough.
: If |a-μ<sub>0</sub>| < ε and |b-μ<sub>0</sub>| > ε, or vice versa, then additional runs of the model are needed to shrink the interval.
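
A minimal sketch of this confidence-interval check, applying the three decision rules above, is shown below; the run outputs, μ<sub>0</sub> and ε are illustrative.

<syntaxhighlight lang="python">
import math
from scipy import stats

runs = [4.2, 3.9, 4.5, 4.1, 4.8, 4.0, 4.3, 3.7, 4.6, 4.4]  # model output, one value per run
mu_0 = 4.3      # known system value
eps = 0.5       # largest acceptable difference
alpha = 0.05
n = len(runs)

mean_y = sum(runs) / n
s = math.sqrt(sum((y - mean_y) ** 2 for y in runs) / (n - 1))
half_width = stats.t.ppf(1 - alpha / 2, n - 1) * s / math.sqrt(n)
a, b = mean_y - half_width, mean_y + half_width
print(f"confidence interval [a, b] = [{a:.2f}, {b:.2f}]")

da, db = abs(a - mu_0), abs(b - mu_0)
if da > eps and db > eps:
    print("Both endpoints differ from mu_0 by more than epsilon: calibrate the model.")
elif da < eps and db < eps:
    print("Both endpoints are within epsilon of mu_0: the model is close enough.")
else:
    print("Inconclusive: make additional runs to shrink the interval.")
</syntaxhighlight>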
 
==== Graphical Comparisons ====
If statistical assumptions cannot be satisfied, or there is insufficient data for the system, a graphical comparison of model outputs to system outputs can be used to make a subjective decision; however, other objective tests are preferable.<ref name ="Sargent" />
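
A minimal sketch of such a graphical comparison, overlaying histograms of hypothetical system and model outputs, is shown below.

<syntaxhighlight lang="python">
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical data: observed system values and simulated model values.
rng = np.random.default_rng(0)
system_output = rng.normal(loc=4.4, scale=0.6, size=200)
model_output  = rng.normal(loc=4.1, scale=0.7, size=200)

# Overlaid histograms let an analyst judge agreement visually.
plt.hist(system_output, bins=20, alpha=0.5, label="system")
plt.hist(model_output, bins=20, alpha=0.5, label="model")
plt.xlabel("average time in line (min)")
plt.ylabel("frequency")
plt.legend()
plt.title("Graphical comparison of model and system outputs")
plt.savefig("model_vs_system.png")
</syntaxhighlight>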
 
== See also ==
* [[Verification and validation]]
* [[Verification and validation (software)]]
 
== References ==
{{Reflist}}
 
{{DEFAULTSORT:Computer Simulation}}
<!--Categories-->
[[Category:Scientific modeling]]
[[Category:Simulation software| ]]
[[Category:Formal methods]]
