Local Interpretations for Explainable Natural Language Processing: A Survey

Siwen Luo, Hamish Ivison, Soyeon Caren Han, Josiah Poon

Research output: Contribution to journal › Review article › peer-review

12 Citations (Scopus)

Abstract

As the use of deep learning techniques has grown across various fields over the past decade, concerns about the opaqueness of these black-box models have also grown, leading to an increased focus on transparency in deep learning models. This work investigates various methods to improve the interpretability of deep neural networks for Natural Language Processing (NLP) tasks, including machine translation and sentiment analysis. We begin with a comprehensive discussion of the definition of the term interpretability and its various aspects. The methods collected and summarised in this survey concern only local interpretation and are divided into three categories: (1) interpreting the model’s predictions through related input features; (2) interpreting through natural language explanation; (3) probing the hidden states of models and word representations.
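
To illustrate category (1), interpreting a prediction through related input features, the sketch below computes a simple gradient saliency score per input token for a toy sentiment classifier. The model, vocabulary, and sentence are illustrative assumptions for this page and are not taken from the survey; gradient saliency is only one of many input-feature attribution methods the survey covers.

```python
# Minimal sketch of input-feature attribution via gradient saliency.
# The toy model, vocabulary, and example sentence are assumptions, not
# taken from the survey itself.
import torch
import torch.nn as nn

torch.manual_seed(0)

vocab = {"<pad>": 0, "the": 1, "movie": 2, "was": 3, "great": 4, "boring": 5}

class TinySentimentClassifier(nn.Module):
    def __init__(self, vocab_size: int, dim: int = 16, num_classes: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.fc = nn.Linear(dim, num_classes)

    def forward(self, token_ids: torch.Tensor):
        emb = self.embed(token_ids)        # (seq_len, dim)
        emb.retain_grad()                  # keep gradients for attribution
        logits = self.fc(emb.mean(dim=0))  # mean-pool tokens, then classify
        return logits, emb

model = TinySentimentClassifier(len(vocab))
tokens = ["the", "movie", "was", "great"]
token_ids = torch.tensor([vocab[t] for t in tokens])

logits, emb = model(token_ids)
pred = logits.argmax()
logits[pred].backward()  # gradient of the predicted-class score w.r.t. embeddings

# Saliency per token: L2 norm of the gradient at that token's embedding.
saliency = emb.grad.norm(dim=-1)
for tok, score in zip(tokens, saliency.tolist()):
    print(f"{tok:>6}: {score:.4f}")
```

Higher saliency scores indicate tokens whose embeddings most strongly influence the predicted class score locally, which is the basic idea behind gradient-based local interpretation methods.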

Original language: English
Article number: 232
Number of pages: 36
Journal: ACM Computing Surveys
Volume: 56
Issue number: 9
Early online date: 25 Apr 2024
DOIs
Publication status: Published - 31 Oct 2024
