4.3 Article

Extracting brand information from social networks: Integrating image, text, and social tagging data

Journal

International Journal of Research in Marketing

Publisher

ELSEVIER
DOI: 10.1016/j.ijresmar.2018.08.002

Keywords

Brand associative network; Image classification; Instagram; Sentiment analysis; Social tag; User-generated content

Category


Abstract

Images are an essential feature of many social networking services, such as Facebook, Instagram, and Twitter. Through brand-related images, consumers communicate about brands with each other and link the brand with rich contextual and consumption experiences. However, previous marketing research has concentrated on deriving brand information from textual user-generated content and has largely neglected brand-related images. The analysis of brand-related images poses at least two challenges. First, the content displayed in images is heterogeneous, and second, images rarely show what users think and feel in or about the situations displayed. To meet these challenges, this article presents a two-step approach that involves collecting, labeling, clustering, aggregating, mapping, and analyzing brand-related user-generated content. The collected data are brand-related images, caption texts, and social tags posted on Instagram. Clustering images labeled via the Google Cloud Vision API makes it possible to identify the heterogeneous contents (e.g., products) and contexts (e.g., situations) that consumers create content about. Aggregating and mapping the textual information for the resulting image clusters in the form of associative networks enables marketers to derive meaningful insights by inferring what consumers think and feel about their brand with respect to different contents and contexts. (C) 2018 Elsevier B.V. All rights reserved.
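The labeling and clustering step summarized in the abstract can be illustrated with a short sketch. The Python code below is a minimal example of how brand-related images might be labeled with the Google Cloud Vision API and then grouped by their label profiles; the folder name, the TF-IDF representation, the k-means algorithm, and the number of clusters are illustrative assumptions and not necessarily the procedure used in the article.

# Minimal sketch: label images with Google Cloud Vision and cluster them
# by their label profiles. The folder, cluster count, and TF-IDF + k-means
# choices are illustrative assumptions, not the paper's exact pipeline.
from pathlib import Path

from google.cloud import vision
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer


def label_images(image_paths):
    """Return one space-separated label string per image via Cloud Vision."""
    client = vision.ImageAnnotatorClient()  # requires GOOGLE_APPLICATION_CREDENTIALS
    label_docs = []
    for path in image_paths:
        image = vision.Image(content=Path(path).read_bytes())
        response = client.label_detection(image=image)
        labels = [a.description.lower() for a in response.label_annotations]
        label_docs.append(" ".join(labels))
    return label_docs


def cluster_images(label_docs, n_clusters=10):
    """Group images with similar label profiles (e.g., products vs. situations)."""
    vectors = TfidfVectorizer().fit_transform(label_docs)
    return KMeans(n_clusters=n_clusters, random_state=0).fit_predict(vectors)


if __name__ == "__main__":
    paths = sorted(Path("brand_images").glob("*.jpg"))  # hypothetical image folder
    clusters = cluster_images(label_images(paths))
    for path, cluster in zip(paths, clusters):
        print(cluster, path.name)

In the article's second step, the caption texts and social tags attached to the images in each resulting cluster would then be aggregated and mapped as associative networks; the sketch above only covers the image-labeling and clustering stage.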

Authors


Reviews

Primary rating: 4.3 (insufficient ratings)

Secondary ratings
Novelty: -
Importance: -
Scientific rigor: -

Recommendations: no data