TY - GEN
T1 - CrisisHateMM: Multimodal Analysis of Directed and Undirected Hate Speech in Text-Embedded Images from Russia-Ukraine Conflict
T2 - 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
AU - Bhandari, Aashish
AU - Shah, Siddhant B
AU - Thapa, Surendrabikram
AU - Naseem, Usman
AU - Nasim, Mehwish
PY - 2023/8/14
Y1 - 2023/8/14
N2 - Text-embedded images are frequently used on social media to convey opinions and emotions, but they can also be a medium for disseminating hate speech, propaganda, and extremist ideologies. During the Russia-Ukraine war, both sides used text-embedded images extensively to spread propaganda and hate speech. To aid in moderating such content, this paper introduces CrisisHateMM, a novel multimodal dataset of over 4,700 text-embedded images from the Russia-Ukraine conflict, annotated for hate and non-hate speech. The hate speech is annotated for directed and undirected hate speech, with directed hate speech further annotated for individual, community, and organizational targets. We benchmark the dataset using unimodal and multimodal algorithms, providing insights into the effectiveness of different approaches for detecting hate speech in text-embedded images. Our results show that multimodal approaches outperform unimodal approaches in detecting hate speech, highlighting the importance of combining visual and textual features. This work provides a valuable resource for researchers and practitioners in automated content moderation and social media analysis. The CrisisHateMM dataset and codes are made publicly available at https://github.com/aabhandari/CrisisHateMM.
UR - http://www.scopus.com/inward/record.url?scp=85170826982&partnerID=8YFLogxK
UR - https://github.com/aabhandari/CrisisHateMM
U2 - 10.1109/CVPRW59228.2023.00193
DO - 10.1109/CVPRW59228.2023.00193
M3 - Conference paper
SN - 979-8-3503-0250-9
T3 - IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
SP - 1994
EP - 2003
BT - Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
PB - IEEE, Institute of Electrical and Electronics Engineers
Y2 - 18 June 2023 through 22 June 2023
ER -