Article

Consistency checks to improve measurement with the Hamilton Rating Scale for Anxiety (HAM-A)

Journal

JOURNAL OF AFFECTIVE DISORDERS
Volume 325, Issue -, Pages 429-436

Publisher

ELSEVIER
DOI: 10.1016/j.jad.2023.01.029

Keywords

Hamilton Anxiety Rating Scale; Consistency of measurement; Careless ratings; Inconsistent ratings

Abstract

The International Society for CNS Clinical Trials and Methodology developed consistency checks for the widely used Hamilton Anxiety Rating Scale (HAM-A) and Clinical Global Impression of Severity of anxiety (CGI-S). The checks revealed that 35% of ratings had at least one flag of inconsistency, with 19% having one flag and 16% having two or more. Applying flags to clinical ratings can aid in detecting imprecise measurement and improve the reliability and validity of trial data.
Background: Mitigating rating inconsistency can improve measurement fidelity and the detection of treatment response.

Methods: The International Society for CNS Clinical Trials and Methodology convened an expert Working Group that developed consistency checks for ratings of the Hamilton Anxiety Rating Scale (HAM-A) and the Clinical Global Impression of Severity of anxiety (CGI-S), which are widely used in studies of mood and anxiety disorders. Flags were applied to 40,349 HAM-A administrations from 15 clinical trials and to Monte Carlo-simulated data as a proxy for applying flags under conditions of inconsistency.

Results: Thirty-three flags were derived; these included logical consistency checks and statistical outlier-response pattern checks. Twenty percent of the HAM-A administrations had at least one logical scoring inconsistency flag, and 4% had two or more. Twenty-six percent of the administrations had at least one statistical outlier flag, and 11% had two or more. Overall, 35% of administrations had at least one flag of any type; 19% had one and 16% had two or more. Most administrations in the Monte Carlo-simulated data raised multiple flags.

Limitations: Flagged ratings may represent less-common presentations of administrations done correctly.

Conclusions: Applying flags to clinical ratings may aid in detecting imprecise measurement. Flags can be used for monitoring raters during an ongoing trial and as part of post-trial evaluation. Applying flags may improve the reliability and validity of trial data.
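The abstract names two families of checks (logical consistency checks and statistical outlier-response pattern checks) but does not enumerate the 33 flags themselves. The sketch below is a hypothetical illustration of how per-administration flags might be applied and tallied; the specific rules, thresholds, and function names are assumptions for illustration only, not the Working Group's actual checks. It assumes the standard 14 HAM-A items scored 0-4 and a CGI-S scored 1-7.

```python
# Minimal sketch of applying hypothetical consistency flags to HAM-A/CGI-S
# administrations. The three rules below are illustrative assumptions; the
# paper's actual 33 flags are not specified in the abstract.

def flag_administration(hama_items, hama_total, cgi_s):
    """Return the list of flag names raised for one administration.

    hama_items : list of 14 item scores, each 0-4
    hama_total : reported HAM-A total score (0-56)
    cgi_s      : Clinical Global Impression of Severity, 1-7
    """
    flags = []

    # Logical consistency check (assumed): reported total must equal item sum.
    if hama_total != sum(hama_items):
        flags.append("total_item_mismatch")

    # Logical consistency check (assumed): a severe HAM-A total paired with a
    # normal/borderline CGI-S is implausible.
    if hama_total >= 25 and cgi_s <= 2:
        flags.append("hama_cgi_discrepancy")

    # Statistical outlier-response pattern check (assumed): the same score on
    # every item is an unusual response pattern.
    if len(set(hama_items)) == 1:
        flags.append("uniform_response_pattern")

    return flags


def summarize(administrations):
    """Share of administrations with any flag, exactly one, or two or more."""
    counts = [len(flag_administration(*a)) for a in administrations]
    n = len(counts)
    return {
        "any_flag": sum(c >= 1 for c in counts) / n,
        "one_flag": sum(c == 1 for c in counts) / n,
        "two_plus": sum(c >= 2 for c in counts) / n,
    }


if __name__ == "__main__":
    # Example: a clean administration, and one whose reported total does not
    # match its item sum and whose response pattern is uniform (two flags).
    example = [
        ([1, 2, 0, 1, 2, 1, 0, 1, 2, 1, 0, 1, 2, 1], 15, 3),
        ([2] * 14, 30, 3),
    ]
    print(summarize(example))
```

In a trial-monitoring workflow, tallies like these could be produced per rater and per site during an ongoing study, or computed once post-trial, mirroring the uses of the flags described in the conclusions.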


