Article

Can a computer outfake a human?

Journal

PERSONALITY AND INDIVIDUAL DIFFERENCES
Volume 217, Issue -, Pages -

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.paid.2023.112434

Keywords

Personality; Single stimulus; Forced choice; Generative AI; Large language models


Generative AI large language models (LLMs) present new challenges for detecting deception on personality tests. The study finds that GPT-4 outperforms the other tested models on the personality assessments and highlights the need for further research as AI technology continues to advance in testing contexts.
Faking on personality tests continues to be a challenge in hiring practices, and with the increased accessibility of free, generative AI large language models (LLMs), the difference between human and algorithmic responses is difficult to distinguish. Four LLMs (GPT-3.5, Jasper, Google Bard, and GPT-4) were prompted to provide ideal responses to personality measures, specific to a provided job description. Responses collected from the LLMs were compared to a previously collected student sample who had also been directed to respond in an ideal fashion to the same job description. Overall, score comparisons indicate the superior performance of GPT-4 on both the single-stimulus and forced-choice personality assessments and reinforce the need to consider more advanced options for preventing faking on personality assessments. Additionally, the results indicate the need for future research, especially as generative AI improves and becomes more accessible to a range of candidates.

