Method and reporting quality in health professions education research: a systematic review

Journal

Medical Education
Volume 45, Issue 3, Pages 227-238

Publisher

Wiley
DOI: 10.1111/j.1365-2923.2010.03890.x

Keywords

-

Funding

  1. Mayo Foundation
  2. John R. Evans Chair in Health Sciences Education Research at McMaster University, Hamilton, Ontario

Abstract

Context: Studies evaluating reporting quality in health professions education (HPE) research have demonstrated deficiencies, but none have used comprehensive reporting standards. Additionally, the relationship between study methods and effect size (ES) in HPE research is unknown.

Objectives: This review aimed to evaluate, in a sample of experimental studies of Internet-based instruction, the quality of reporting, the relationship between reporting and methodological quality, and associations between ES and study methods.

Methods: We conducted a systematic search of databases, including MEDLINE, Scopus, CINAHL, EMBASE and ERIC, for articles published during 1990-2008. Studies (in any language) quantifying the effect of Internet-based instruction in HPE compared with no intervention or with other instruction were included. Working independently and in duplicate, we coded reporting quality using the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement, and coded study methods using a modified Newcastle-Ottawa Scale (m-NOS), the Medical Education Research Study Quality Instrument (MERSQI) and the Best Evidence in Medical Education (BEME) global scale.

Results: For reporting quality, articles scored a mean ± standard deviation (SD) of 51 ± 25% of STROBE elements for the Introduction, 58 ± 20% for the Methods, 50 ± 18% for the Results and 41 ± 26% for the Discussion sections. We found positive associations (all p < 0.0001) between reporting quality and MERSQI (rho = 0.64), m-NOS (rho = 0.57) and BEME (rho = 0.58) scores. We explored associations between study methods and knowledge ES by subtracting each study's ES from the pooled ES for studies using that method and comparing these differences between subgroups. Effect sizes in single-group pretest/post-test studies differed from the pooled estimate more than ESs in two-group studies did (p = 0.013). No differences were found for the other study methods examined (each coded yes/no: representative sample, comparison group from the same community, randomised, allocation concealed, participants blinded, assessor blinded, objective assessment, high follow-up).

Conclusions: Information is missing from all sections of reports of HPE experiments. Single-group pre-/post-test studies may overestimate ES compared with two-group designs. Other methodological variations did not bias study results in this sample.
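To make the two analyses in the Results concrete, below is a minimal Python sketch of (i) a Spearman rank correlation between a reporting-quality score and a methods score, and (ii) the described ES-deviation comparison, in which each study's effect size is subtracted from the pooled ES of studies sharing its design and the deviations are compared between subgroups. Everything here is an illustrative assumption rather than the authors' actual analysis: the data are fabricated, the variable names (strobe_pct, mersqi, es) are hypothetical, the pooled ES is an unweighted mean rather than a meta-analytic estimate, and the abstract does not name the subgroup test, so a Mann-Whitney rank-sum comparison is assumed.

```python
import numpy as np
from scipy.stats import spearmanr, mannwhitneyu

rng = np.random.default_rng(0)

# Fabricated placeholder scores for 40 studies (hypothetical, not the
# paper's data): percentage of STROBE elements reported, and a MERSQI
# methods score loosely correlated with it.
n = 40
strobe_pct = rng.uniform(20, 90, size=n)
mersqi = strobe_pct / 10 + rng.normal(0, 1.0, size=n)

# (i) Association between reporting quality and methodological quality.
rho, p = spearmanr(strobe_pct, mersqi)
print(f"Spearman rho = {rho:.2f}, p = {p:.4g}")

# (ii) ES-deviation analysis: subtract each study's effect size from the
# pooled ES of studies sharing its design, then compare the absolute
# deviations between single-group pre/post and two-group designs.
es = rng.normal(0.5, 0.4, size=n)                      # fabricated effect sizes
single_group = rng.integers(0, 2, size=n).astype(bool)  # True = pre/post design

deviations = np.empty(n)
for design in (True, False):
    mask = single_group == design
    pooled = es[mask].mean()                 # unweighted pooled ES (assumption)
    deviations[mask] = np.abs(es[mask] - pooled)

# The abstract does not specify the test; a rank-sum comparison is assumed.
stat, p = mannwhitneyu(deviations[single_group], deviations[~single_group])
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.4g}")
```

Comparing deviations from the pooled estimate, rather than raw effect sizes, matches the logic of the abstract: the question is whether a given design systematically pushes a study's ES away from the consensus estimate, which is how a design feature would bias results.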
