Journal
COGNITIVE COMPUTATION
Volume 13, Issue 2, Pages 241-260
Publisher
SPRINGER
DOI: 10.1007/s12559-019-09695-3
Keywords
Personality; Gender; Natural language processing; Computer vision; Computational social science
Funding
- National Science Foundation [1344257]
- John Templeton Foundation [48503]
- DARPA [HR001117S0026-AIDA-FP-045]
- Michigan Institute for Data Science
Abstract
This work explores the relationship between a person's demographic/psychological traits (e.g., gender and personality) and self-identity images and captions. We use a dataset of images and captions provided by N ≈ 1350 individuals, and we automatically extract features from both the images and captions. We identify several visual and textual properties that show reliable relationships with individual differences between participants. The automated techniques presented here allow us to draw interesting conclusions from our data that would be difficult to identify manually, and these techniques are extensible to other large datasets. Additionally, we consider the task of predicting gender and personality using both single modality features and multimodal features. We show that a multimodal predictive approach outperforms purely visual methods and purely textual methods. We believe that our work on the relationship between user characteristics and user data has relevance in online settings, where users upload billions of images each day.
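To illustrate the kind of comparison the abstract describes (single-modality vs. multimodal prediction), below is a minimal sketch assuming early fusion, i.e., concatenating visual and textual feature vectors before fitting a classifier. The placeholder feature matrices, the logistic-regression model, and the cross-validation setup are illustrative assumptions, not the authors' actual feature extractors or pipeline.

```python
# Illustrative sketch of single-modality vs. early-fusion multimodal prediction.
# `visual_features` and `text_features` stand in for per-user features extracted
# from images and captions; here they are random placeholders, not real data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_users = 1350                                        # dataset size from the abstract (N ≈ 1350)
visual_features = rng.normal(size=(n_users, 128))     # placeholder image-derived features
text_features = rng.normal(size=(n_users, 300))       # placeholder caption-derived features
gender = rng.integers(0, 2, size=n_users)             # placeholder binary labels

# Single-modality baselines: fit the same classifier on each feature set alone.
visual_acc = cross_val_score(LogisticRegression(max_iter=1000),
                             visual_features, gender, cv=5).mean()
text_acc = cross_val_score(LogisticRegression(max_iter=1000),
                           text_features, gender, cv=5).mean()

# Multimodal model: concatenate (early-fuse) visual and textual features.
fused = np.hstack([visual_features, text_features])
multimodal_acc = cross_val_score(LogisticRegression(max_iter=1000),
                                 fused, gender, cv=5).mean()

print(f"visual: {visual_acc:.3f}  text: {text_acc:.3f}  multimodal: {multimodal_acc:.3f}")
```

With these random placeholders all scores hover near chance; the point of the sketch is only the structure of the comparison, in which informative features from both modalities would let the fused model outperform either single-modality baseline.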