%0 Conference Proceedings
%T Explaining Recurrent Neural Network Predictions in Sentiment Analysis
%A Arras, Leila
%A Montavon, Grégoire
%A Müller, Klaus-Robert
%A Samek, Wojciech
%Y Balahur, Alexandra
%Y Mohammad, Saif M.
%Y van der Goot, Erik
%S Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis
%D 2017
%8 September
%I Association for Computational Linguistics
%C Copenhagen, Denmark
%F arras-etal-2017-explaining
%X Recently, a technique called Layer-wise Relevance Propagation (LRP) was shown to deliver insightful explanations, in the form of input-space relevances, for understanding feed-forward neural network classification decisions. In the present work, we extend the usage of LRP to recurrent neural networks. We propose a specific propagation rule applicable to multiplicative connections as they arise in recurrent network architectures such as LSTMs and GRUs. We apply our technique to a word-based bi-directional LSTM model on a five-class sentiment prediction task, and evaluate the resulting LRP relevances both qualitatively and quantitatively, obtaining better results than a related gradient-based method used in previous work.
%R 10.18653/v1/W17-5221
%U https://aclanthology.org/W17-5221
%U https://doi.org/10.18653/v1/W17-5221
%P 159-168
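
The central technical claim in the %X abstract is the propagation rule for multiplicative connections. As a minimal sketch of that idea (not the authors' reference implementation; the function names, the NumPy framing, and the simplified epsilon-rule are assumptions here), relevance arriving at a product z = gate * signal is assigned entirely to the signal, while linear connections are handled with a stabilized epsilon-rule:

    import numpy as np

    def lrp_linear(w, x, z_out, r_out, eps=1e-3):
        # Simplified epsilon-LRP for a linear map z_out = w @ x + b:
        # each input i receives relevance in proportion to its
        # contribution w[j, i] * x[i] to output j. (The paper's full
        # rule also redistributes a bias/stabilizer share, omitted here.)
        denom = z_out + eps * np.where(z_out >= 0, 1.0, -1.0)
        return ((w * x[None, :]) / denom[:, None] * r_out[:, None]).sum(axis=0)

    def lrp_product(gate, signal, r_out):
        # Rule for multiplicative connections z = gate * signal, as in
        # LSTM/GRU gating: the gate is treated as a switch and receives
        # zero relevance; the signal inherits all incoming relevance.
        return np.zeros_like(gate), r_out

Applied backwards from the output score of the predicted class through the unrolled recurrent network, these two rules together yield one relevance value per input word.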