Abstract
The growing importance and popularity of online knowledge-exchange platforms have produced an extensive body of literature on evaluating the quality of shared knowledge. However, existing evaluation tools rely heavily on popular votes to determine quality. As the capabilities of AI (Artificial Intelligence) continue to improve, AI may become an effective tool for manipulating and misleading humans, making popular-vote-based evaluations less reliable. Therefore, in this study, we argue that it is important to design a systematic quality-assessment framework to understand people's knowledge-evaluation processes. In doing so, we can examine whether these processes differ significantly between AI-generated knowledge content and human-contributed content. We draw on the IAM (Information Adoption Model) to build the framework. To increase the validity of the study, we also examine the impact of knowledge type and characteristics. With this study, we expect to provide a new, valid use case that extends the IAM by considering the AI factor together with the potential contingency effects of knowledge types and characteristics. Finally, we believe this study will open a route to a reliable, accurate, readily extensible, and adjustable framework.
| Original language | English |
|---|---|
| Publication status | Published - 2022 |
| Externally published | Yes |
| Event | 2022 Workshop on Information Systems in Asia Pacific - Copenhagen, Denmark |
| Duration | 10 Dec 2022 → 10 Dec 2022 |
Conference
| Conference | 2022 Workshop on Information Systems in Asia Pacific |
|---|---|
| Country/Territory | Denmark |
| City | Copenhagen |
| Period | 10/12/22 → 10/12/22 |