4.4 Review

State-of-the-art Dashboards on Clinical Indicator Data to Support Reflection on Practice: Scoping Review

Journal

JMIR MEDICAL INFORMATICS
Volume 10, Issue 2, Pages: -

Publisher

JMIR PUBLICATIONS, INC
DOI: 10.2196/32695

Keywords

practice analytics dashboards; data visualization; reflective practice; professional learning; mobile phone

Funding

  1. Digital Health Cooperative Research Centre (DHCRC)
  2. Australian Government's Cooperative Research Centres Program
  3. DHCRC project grant

Summary

This scoping review summarizes the literature on dashboards based on routinely collected clinical indicator data. While common data visualization techniques and clinical indicators were used across studies, there was diversity in the design of the dashboards and their evaluation. There was a lack of detail regarding interface features to support clinicians in making sense of and reflecting on their personal performance data.
Background: There is increasing interest in using routinely collected eHealth data to support reflective practice and long-term professional learning. Studies have evaluated the impact of dashboards on clinician decision-making, task completion time, user satisfaction, and adherence to clinical guidelines.

Objective: This scoping review aims to summarize the literature on dashboards based on patient administrative, medical, and surgical data that are intended to support clinicians' reflective practice.

Methods: A scoping review was conducted using the Arksey and O'Malley framework. Five electronic databases (MEDLINE, Embase, Scopus, ACM Digital Library, and Web of Science) were searched to identify studies that met the inclusion criteria. Study selection and characterization were performed by 2 independent reviewers (BB and CP). One reviewer extracted the data, which were analyzed descriptively to map the available evidence.

Results: A total of 18 dashboards from 8 countries were assessed. The dashboards were designed for performance improvement (10/18, 56%), to support quality and safety initiatives (6/18, 33%), and for management and operations (4/18, 22%). Data visualizations were primarily designed for team use (12/18, 67%) rather than for individual clinicians (4/18, 22%). Evaluation methods varied and included asking clinicians directly (11/18, 61%), observing user behavior through clinical indicators and use log data (14/18, 78%), and usability testing (4/18, 22%). The studies reported high scores on standard usability questionnaires and favorable survey and interview feedback. Improvements in the underlying clinical indicators were observed in 78% (7/9) of the studies, whereas 22% (2/9) reported no significant changes in performance.

Conclusions: This scoping review maps the current literature on dashboards based on routinely collected clinical indicator data. Although common data visualization techniques and clinical indicators were used across studies, the dashboards varied in their design and evaluation. The design processes were not documented in sufficient detail to support reproducibility, and we identified a lack of interface features to support clinicians in making sense of and reflecting on their personal performance data.
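
To make the subject matter concrete, the sketch below illustrates, in general terms, how a practice analytics dashboard might derive a clinical indicator from routinely collected administrative data and contrast an individual clinician with a peer benchmark to prompt reflection. It is an illustrative assumption only: the record fields, clinician identifiers, and the 30-day readmission indicator are hypothetical and are not drawn from any of the reviewed studies.

```python
# Minimal sketch of an indicator-versus-peer-benchmark summary for a
# practice analytics dashboard. Hypothetical data and fields throughout;
# not the method of any study in the scoping review.
from collections import defaultdict

# Routinely collected admission records (hypothetical).
records = [
    {"clinician": "A", "readmitted_within_30d": False},
    {"clinician": "A", "readmitted_within_30d": True},
    {"clinician": "A", "readmitted_within_30d": False},
    {"clinician": "B", "readmitted_within_30d": True},
    {"clinician": "B", "readmitted_within_30d": True},
    {"clinician": "C", "readmitted_within_30d": False},
]

def readmission_rates(rows):
    """Return the 30-day readmission rate per clinician."""
    totals, readmits = defaultdict(int), defaultdict(int)
    for row in rows:
        totals[row["clinician"]] += 1
        readmits[row["clinician"]] += int(row["readmitted_within_30d"])
    return {c: readmits[c] / totals[c] for c in totals}

rates = readmission_rates(records)
peer_mean = sum(rates.values()) / len(rates)

# Dashboard-style summary: individual indicator vs. peer benchmark,
# intended to prompt reflection rather than automated judgement.
for clinician, rate in sorted(rates.items()):
    flag = "above peer mean" if rate > peer_mean else "at or below peer mean"
    print(f"Clinician {clinician}: 30-day readmission rate {rate:.0%} ({flag})")
print(f"Peer mean: {peer_mean:.0%}")
```

The dashboards covered in the review used richer data sources and visualization techniques; the sketch shows only the individual-indicator-versus-peer-comparison pattern that underpins reflection on personal performance.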
