4.1 Article

Evaluating pharmacokinetic/pharmacodynamic models using the posterior predictive check

Journal

JOURNAL OF PHARMACOKINETICS AND PHARMACODYNAMICS
Volume 28, Issue 2, Pages 171-192

Publisher

Kluwer Academic/Plenum Publishers
DOI: 10.1023/A:1011555016423

Keywords

posterior predictive check; p value; model evaluation; pharmacokinetics; pharmacodynamics

Funding

  1. NIGMS NIH HHS [GM57183] Funding Source: Medline

The posterior predictive check (PPC) is a model evaluation tool. It assigns a value (p_PPC) to the probability that the value of a given statistic computed from data arising under an analysis model is as or more extreme than the value computed from the real data themselves. If this probability is too small, the analysis model is regarded as invalid for the given statistic. Properties of the PPC for pharmacokinetic (PK) and pharmacodynamic (PD) model evaluation are examined herein for a particularly simple simulation setting: extensive sampling of a single individual's data arising from simple PK/PD and error models. To test the performance characteristics of the PPC, "real" data are repeatedly simulated and, for a variety of statistics, the PPC is applied to an analysis model, which may (null hypothesis) or may not (alternative hypothesis) be identical to the simulation model. Five models are used here: (PK1) monoexponential with proportional error; (PK2) biexponential with proportional error; (PK2 epsilon) biexponential with additive error; (PD1) E-max model with additive error under the logit transform; and (PD2) sigmoid E-max model with additive error under the logit transform. Six simulation/analysis settings are studied. The first three, (PK1/PK1), (PK2/PK2), and (PD1/PD1), evaluate whether the PPC has an appropriate type-I error level, whereas the second three, (PK2/PK1), (PK2 epsilon/PK2), and (PD2/PD1), evaluate whether the PPC has adequate power. For a set of 100 data sets simulated/analyzed under each model pair according to a stipulated extensive sampling design, p_PPC is computed for a number of statistics in three different ways (each way uses a different approximation to the posterior distribution on the model parameters). We find that in general (i) the PPC is conservative under the null in the sense that, for many statistics, prob(p_PPC less than or equal to alpha) < alpha for small alpha.
With respect to such statistics, this means that useful models will rarely be regarded incorrectly as invalid. A high correlation of a statistic with the parameter estimates obtained from the same data used to compute the statistic (a measure of statistical sufficiency) tends to identify the most conservative statistics. (ii) Power is not very great, at least for the alternative models we tested, and it is especially poor with statistics that are in part a function of parameters as well as data. Although there is a tendency for nonsufficient statistics (as we have measured this) to have greater power, this is by no means an infallible diagnostic. (iii) No clear advantage is found for one or another method of approximating the posterior distribution on model parameters.
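The PPC computation described above can be sketched in a few lines: draw parameters from (an approximation to) the posterior, simulate replicated data under the analysis model for each draw, and estimate p_PPC as the fraction of replicated statistics at least as extreme as the observed one. The sketch below is illustrative only and is not the authors' implementation; the monoexponential model with proportional error stands in for the paper's PK1 case, and all parameter names, dose, and sampling times are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def posterior_predictive_p(y_obs, posterior_draws, simulate, statistic):
    """Monte Carlo estimate of p_PPC: the probability that the statistic
    computed from data replicated under the analysis model (parameters
    drawn from the posterior) is as or more extreme than the observed value."""
    t_obs = statistic(y_obs)
    t_rep = np.array([statistic(simulate(theta)) for theta in posterior_draws])
    return np.mean(t_rep >= t_obs)

# Hypothetical one-compartment model (analogous to PK1): dose 100,
# clearance/volume parameterization, proportional residual error.
times = np.linspace(0.5, 12.0, 24)  # "extensive sampling" of one individual

def simulate(theta):
    cl, v, sigma = theta
    conc = (100.0 / v) * np.exp(-(cl / v) * times)
    return conc * (1.0 + sigma * rng.standard_normal(times.size))

y_obs = simulate((2.0, 20.0, 0.1))  # simulated "real" data

# Stand-in for genuine posterior draws: point estimates with jitter.
draws = [(2.0 + 0.1 * rng.standard_normal(),
          20.0 + 1.0 * rng.standard_normal(),
          0.1) for _ in range(500)]

p = posterior_predictive_p(y_obs, draws, simulate, statistic=np.max)
```

A p_PPC near 0 or 1 would flag the analysis model as poorly reproducing that statistic; the paper's point (i) is that for near-sufficient statistics such flags are rarer than the nominal alpha would suggest.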
