Interactions between faces and visual context in emotion perception: A meta-analysis

Ben A. Steward, Paige Mewton, Romina Palermo, Amy Dawel

Research output: Contribution to journal › Review article › peer-review

Abstract

Long-standing theories in emotion perception, such as basic emotion theory, argue that we primarily perceive others' emotions through facial expressions. However, compelling evidence shows that other visual contexts, such as body posture or scenes, significantly influence the emotions perceived from faces, and vice versa. We used meta-analysis to synthesise and quantify these effects for the first time, testing whether faces have primacy over context after accounting for key moderators, namely the emotional congruency and clarity of the stimuli. A total of 1,020 effect sizes from 37 articles and 3,198 participants were meta-analysed using three-level mixed-effects models with robust variance estimation. Both visual context and faces were found to have large effects on emotion labelling for the other (g(av) > 1.23). Effects were larger when visual context and faces signalled different (incongruent) rather than the same (congruent) emotions, and congruent effects were moderated by how clearly stimuli signalled the target emotion. When these factors were accounted for, faces were no more influential in altering emotion labelling than body postures or body postures with scenes. The findings of this review clearly evidence the integrative nature of emotion perception. Importantly, however, they also highlight that the influence of different emotion signals depends on how clearly they signal an emotion. Future research needs to account for emotional congruency and signal clarity.
Original language: English
Article number: 104549
Number of pages: 17
Journal: Psychonomic Bulletin & Review
DOIs
Publication status: E-pub ahead of print - 3 Apr 2025

