Towards systematic evaluation of user-generated vs AI-generated contents in online knowledge exchange communities

Research output: Contribution to conference › Other › peer-review

Abstract

The increasing importance and popularity of online knowledge-exchange platforms have produced an extensive body of literature on evaluating knowledge quality. However, existing evaluation tools rely heavily on popular vote to determine quality. As the capabilities of AI (Artificial Intelligence) continue to improve, AI may become an effective tool for manipulating and misleading humans, making popular-vote-based evaluations less reliable. Therefore, in this study, we argue it is important to design a systematic quality-assessment framework to understand people's knowledge-evaluation processes. In doing so, we can examine whether these processes differ significantly between AI-generated and human-contributed knowledge content. We draw on the IAM (Information Adoption Model) to build the framework. To increase the validity of the study, we also examine the impact of knowledge type and characteristics. With this study, we expect to provide a new, valid use case that extends the IAM by considering the AI factor together with the potential contingency effects of knowledge types and characteristics. Finally, we believe this study will open a route to a reliable, accurate, readily extensible, and adjustable framework.
Original language: English
Publication status: Published - 2022
Externally published: Yes
Event: 2022 Workshop on Information Systems in Asia Pacific - Copenhagen, Denmark
Duration: 10 Dec 2022 - 10 Dec 2022

