Article

A suggested method for dispersion model evaluation

Journal

JOURNAL OF THE AIR & WASTE MANAGEMENT ASSOCIATION
Volume 64, Issue 3, Pages 255-264

Publisher

TAYLOR & FRANCIS INC
DOI: 10.1080/10962247.2013.833147


Too often, operational atmospheric dispersion models are evaluated on their ability to replicate short-term concentration maxima, when a valid model evaluation procedure would instead evaluate a model's ability to replicate ensemble-average patterns in hourly concentration values. A valid model evaluation includes two basic tasks: in Step 1, analyze the observations to provide average patterns for comparison with modeled patterns; in Step 2, account for the uncertainties inherent in Step 1, so that it is possible to tell whether differences seen in a comparison of the performance of several models are statistically significant. Using comparisons of model simulation results from AERMOD and ISCST3 with tracer concentration values collected during the EPRI Kincaid experiment, a candidate model evaluation procedure is demonstrated that assesses whether a model has the correct total mass at the receptor level (crosswind-integrated concentration values), whether a model spreads the mass laterally correctly (lateral dispersion), and the uncertainty in characterizing the transport. The use of the BOOT software (preferably with the ASTM D 6589 resampling procedure) is suggested to provide an objective assessment of whether differences in performance between models are significant. Implications: Regulatory agencies can choose to treat modeling results as pseudo-monitors, but air quality models only predict what they are constructed to predict, which certainly does not include the stochastic variations that produce observed short-term maxima (e.g., arc maxima). Models predict the average concentration pattern of a collection of hours having very similar dispersive conditions.
An easy-to-implement evaluation procedure is presented that challenges a model to properly estimate ensemble average concentration values, reveals where to look in a model to remove bias, and provides statistical tests to assess the significance of skill differences seen between competing models.
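The statistical test the abstract points to (BOOT with the ASTM D 6589 resampling procedure) is not reproduced here, but the underlying idea can be sketched as a generic paired bootstrap: resample the hours with replacement, recompute a performance metric for each competing model on the same resampled hours, and examine the distribution of the metric difference. All data values, the fractional-bias metric choice, and the model names below are hypothetical illustrations, not results from the paper.

```python
import random
import statistics

def paired_bootstrap_diff(obs, model_a, model_b, metric, n_boot=2000, seed=42):
    """Paired bootstrap: resample hours with replacement and return the
    bootstrap distribution of metric(obs, model_a) - metric(obs, model_b)."""
    rng = random.Random(seed)
    n = len(obs)
    diffs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # same hours for both models
        o = [obs[i] for i in idx]
        a = [model_a[i] for i in idx]
        b = [model_b[i] for i in idx]
        diffs.append(metric(o, a) - metric(o, b))
    return diffs

def fractional_bias(obs, mod):
    """Fractional bias: 2*(mean(obs) - mean(mod)) / (mean(obs) + mean(mod))."""
    mo, mm = statistics.fmean(obs), statistics.fmean(mod)
    return 2.0 * (mo - mm) / (mo + mm)

# Hypothetical hourly crosswind-integrated concentrations for illustration only
obs      = [5.2, 4.8, 6.1, 3.9, 5.5, 4.2, 6.8, 5.0]
model_a  = [5.0, 4.5, 6.3, 4.1, 5.2, 4.0, 6.5, 5.1]  # near-unbiased model
model_b  = [6.5, 6.0, 7.9, 5.2, 7.1, 5.6, 8.4, 6.3]  # consistently overpredicting model

diffs = paired_bootstrap_diff(obs, model_a, model_b,
                              lambda o, m: abs(fractional_bias(o, m)))
diffs.sort()
lo, hi = diffs[int(0.025 * len(diffs))], diffs[int(0.975 * len(diffs))]
# If the 95% interval excludes zero, the skill difference is significant.
print(f"95% CI for |FB| difference: [{lo:.3f}, {hi:.3f}]")
```

Because the resampling is paired (both models are scored on the same resampled hours), hour-to-hour meteorological variability cancels out of the difference, which is why a paired scheme has more power to detect a genuine skill difference than resampling each model's scores independently.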
