Journal
INFORMATION SYSTEMS FRONTIERS
Volume 24, Issue 4, Pages 1265-1285
Publisher
SPRINGER
DOI: 10.1007/s10796-021-10122-y
Keywords
Text analytics; Sentiment analysis; Quality management; Supervised learning; Unsupervised learning; Business intelligence
Funding
- Thammasat University in the form of the Bualuang ASEAN Fellowship
The paper evaluates multiple methods for detecting defect-related discussion in online reviews and finds that supervised learning techniques outperform other text-analytic approaches in cross-category analysis, and are especially effective when tailored to a single category of study.
Online reviews contain many vital insights for quality management, but the volume of content makes identifying defect-related discussion difficult. This paper critically assesses multiple approaches for detecting defect-related discussion, ranging from out-of-the-box sentiment analyses to supervised and unsupervised machine-learned defect terms. We examine reviews from 25 product and service categories to assess each method's performance. We examine each approach across the broad cross-section of categories as well as when tailored to a singular category of study. Surprisingly, we found that negative sentiment was often a poor predictor of defect-related discussion. Terms generated with unsupervised topic modeling tended to correspond to generic product discussions rather than defect-related discussion. Supervised learning techniques outperformed the other text analytic techniques in our cross-category analysis, and they were especially effective when confined to a single category of study. Our work suggests a need for category-specific text analyses to take full advantage of consumer-driven quality intelligence.
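The supervised approach the abstract favors amounts to training a text classifier on reviews labeled as defect-related or not. A minimal sketch of that idea, using a TF-IDF bag-of-words representation with logistic regression; the tiny review corpus and labels below are illustrative stand-ins, not the authors' dataset or their exact model:

```python
# Sketch of supervised defect detection in review text.
# Labels: 1 = defect-related discussion, 0 = other discussion.
# The data here is made up for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [
    "screen cracked after a week",
    "battery stopped charging and the case split",
    "the zipper broke on the second use",
    "arrived dead on arrival, would not power on",
    "love the color and it shipped fast",
    "great value for the price, very happy",
    "works exactly as described",
    "my kids enjoy it every day",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

# TF-IDF features feed a linear classifier; word n-grams let the model
# pick up short defect phrases as well as single terms.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
clf.fit(reviews, labels)

# Score an unseen review for defect-related content.
print(clf.predict(["the handle broke off after one month"])[0])
```

As the abstract notes, such a classifier can be trained once across many categories or retrained per category; the per-category variant lets the model learn vocabulary (e.g., "zipper" vs. "battery") specific to that product type.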