xAI 2024: Valletta, Malta
- Luca Longo, Weiru Liu, Grégoire Montavon:
Joint Proceedings of the xAI 2024 Late-breaking Work, Demos and Doctoral Consortium co-located with the 2nd World Conference on eXplainable Artificial Intelligence (xAI-2024), Valletta, Malta, July 17-19, 2024. CEUR Workshop Proceedings 3793, CEUR-WS.org 2024. Preface.
Late-Breaking Work
- Raphael Wallsberger, Ricardo Knauer, Stephan Matzka:
Explainable Artificial Intelligence Beyond Feature Attributions: The Validity and Reliability of Feature Selection Explanations. 1-8
- Paolo Giudici, Parvati Neelakantan:
Shapley values and fairness. 9-16
- Michael Erol Schaffer, Lutz Terfloth, Carsten Schulte, Heike M. Buhl:
Perception and Consideration of the Explainees' Needs for Satisfying Explanations. 17-24
- Arjun Vinayak Chikkankod, Luca Longo:
A proposal for improving EEG microstate generation via interpretable deep clustering with convolutional autoencoders. 25-32
- Yao Rong, David Scheerer, Enkelejda Kasneci:
Faithful Attention Explainer: Verbalizing Decisions Based on Discriminative Features. 33-40
- Gulsum Alicioglu, Bo Sun:
Use Bag-of-Patterns Approach to Explore Learned Behaviors of Reinforcement Learning. 41-48
- Zhechang Xue, Yiran Huang, Hongnan Ma, Michael Beigl:
Generate Explanations for Time-series classification by ChatGPT. 49-56
- Paolo Giudici, Giulia Vilone:
Model agnostic calibration of image classifiers. 57-64
- Ephrem Tibebe Mekonnen, Luca Longo, Pierpaolo Dondio:
Interpreting Black-Box Time Series Classifiers using Parameterised Event Primitives. 65-72
- Lisa Anita De Santi, Jörg Schlötterer, Meike Nauta, Vincenzo Positano, Christin Seifert:
Patch-based Intuitive Multimodal Prototypes Network (PIMPNet) for Alzheimer's Disease classification. 73-80
- Gabriele Dominici, Pietro Barbiero, Francesco Giannini, Martin Gjoreski, Marc Langheinrich:
AnyCBMs: How to Turn Any Black Box into a Concept Bottleneck Model. 81-88
- Amal Saadallah:
Online Explainable Ensemble of Tree Models Pruning for Time Series Forecasting. 89-96
- Leon Hegedic, Luka Hobor, Nikola Maric, Martin Ante Rogosic, Mario Brcic:
Towards Mechanistic Interpretability for Autoencoder compression of EEG signals. 97-104
- Luca Macis, Marco Tagliapietra, Alessandro Castelnovo, Daniele Regoli, Greta Greco, Andrea Claudio Cosentini, Paola Pisano, Edoardo Carroccetto:
Integrating XAI for Predictive Conflict Analytics. 105-112
- Yuwei Liu, Chen Dan, Anubhav Bhatti, Bingjie Shen, Divij Gupta, Suraj Parmar, San Lee:
Interpretable Vital Sign Forecasting with Model Agnostic Attention Maps. 113-120
- Tomas Bueno Momcilovic, Beat Buesser, Giulio Zizzo, Mark Purcell, Dian Balta:
Towards Assurance of LLM Adversarial Robustness using Ontology-Driven Argumentation. 121-128
- Andrzej Porebski:
Looking for the Right Paths to Use XAI in the Judiciary. Which Branches of Law Need Inherently Interpretable Machine Learning Models and Why? 129-136
- Urja Pawar, Ruairi O'Reilly, Christian Beder, Donna O'Shea:
The Dynamics of Explainability: Diverse Insights from SHAP Explanations using Neighbourhoods. 137-144
- Vladimir Marochko, Luca Longo:
Enhancing the analysis of the P300 event-related potential with integrated gradients on a convolutional neural network trained with superlets. 145-152
- Eduard Barbu, Marharyta Domnich, Raul Vicente, Nikos Sakkas:
Exploring Commonalities in Explanation Frameworks: A Multi-Domain Survey Analysis. 153-160
- Nicholas Pochinkov, Ben Pasero, Skylar Shibayama:
Investigating Neuron Ablation in Attention Heads: The Case for Peak Activation Centering. 161-168
- Hamza Zidoum, Ali AlShareedah, Aliya Al-Ansari, Batool Al-Lawati, Sumaya Al-Sawafi:
CatBoost model with self-explanatory capabilities for predicting SLE in OMAN population. 169-176
- Ladan Gholami, Pietro Ducange, Pietro Cassarà, Alberto Gotta:
Channel Modeling for Millimeter-Wave UAV Communication based on Explainable Generative Neural Network. 177-184
- Antonio Mastroianni, Sibylle D. Sager-Müller:
Validation of ML Models from the Field of XAI for Computer Vision in Autonomous Driving. 185-192
- Zohaib Shahid, Yogachandran Rahulamathavan, Safak Dogan:
Second Glance: A Novel Explainable AI to Understand Feature Interactions in Neural Networks using Higher-Order Partial Derivatives. 193-200
- Siri Padmanabhan Poti, Christopher J. Stanton:
Mediating Explainer for Human Autonomy Teaming. 201-208
- Ondrej Lukás, Sebastian García:
Exploring Agent Behaviors in Network Security through Trajectory Clustering. 209-216
- Giovanni Bocchi, Patrizio Frosini, Alessandra Micheletti, Alessandro Pedretti, Gianluca Palermo, Davide Gadioli, Carmen Gratteri, Filippo Lunghini, Andrea Rosario Beccari, Anna Fava, Carmine Talarico:
A geometric XAI approach to protein pocket detection. 217-224
- Felix Liedeker, Christoph Düsing, Marcel Nieveler, Philipp Cimiano:
An Empirical Investigation of Users' Assessment of XAI Explanations: Identifying the Sweet Spot of Explanation Complexity and Value. 225-232
- André Artelt, Andreas Gregoriades:
A Two-Stage Algorithm for Cost-Efficient Multi-instance Counterfactual Explanations. 233-240
- Finn Schürmann, Sibylle D. Sager-Müller:
Interactive xAI-dashboard for Semantic Segmentation. 241-248
- Mohammad Naiseh, Catherine Webb, Timothy J. Underwood, Gopal Ramchurn, Zoë Walters, Navamayooran Thavanesan, Ganesh Vigneswaran:
XAI for Group-AI Interaction: Towards Collaborative and Inclusive Explanations. 249-256
- Riccardo Crupi, Daniele Regoli, Alessandro Damiano Sabatino, Immacolata Marano, Massimiliano Brinis, Luca Albertazzi, Andrea Cirillo, Andrea Claudio Cosentini:
Unraveling Anomalies: Explaining Outliers with DTOR. 257-264
Demos
- Romain Xu-Darme, Aymeric Varasse, Alban Grastien, Julien Girard-Satabin, Zakaria Chihani:
CaBRNet, An Open-Source Library For Developing And Evaluating Case-Based Reasoning Models. 265-272
- Van Bach Nguyen, Jörg Schlötterer, Christin Seifert:
XAgent: A Conversational XAI Agent Harnessing the Power of Large Language Models. 273-280
- Susanne Dandl, Marc Becker, Bernd Bischl, Giuseppe Casalicchio, Ludwig Bothmann:
mlr3summary: Concise and interpretable summaries for machine learning models. 281-288
- Gabriele Sarti, Nils Feldhus, Jirui Qi, Malvina Nissim, Arianna Bisazza:
Democratizing Advanced Attribution Analyses of Generative Language Models with the Inseq Toolkit. 289-296
- Jérôme Guzzi, Alessandro Giusti:
Human-in-the-loop testing of the explainability of robot navigation algorithms in extended reality. 297-304
- Claudio Muselli, Damiano Verda, Enrico Ferrari, Claire Thomas Gaggiotti, Marco Muselli:
Rulex Platform: leveraging domain knowledge and data-driven rules to support decisions in the fintech sector through eXplainable AI models. 305-312
- Marta Caro-Martínez, Anne Liret, Belén Díaz-Agudo, Juan A. Recio-García, Jesus M. Darias, Nirmalie Wiratunga, Anjana Wijekoon, Kyle Martin, Ikechukwu Nkisi-Orji, David Corsar, Chamath Palihawadana, Craig Pirie, Derek G. Bridge, Preeja Pradeep, Bruno Fleisch:
Building Personalised XAI Experiences Through iSee: a Case-Based Reasoning-Driven Platform. 313-320
Doctoral Consortium
- Laura Bergomi:
Fostering Human-AI interaction: development of a Clinical Decision Support System enhanced by eXplainable AI and Natural Language Processing. 321-328
- Danilo Danese:
Optimizing Synthetic Data from Scarcity: Towards Meaningful Data Generation in High-Dimensional Low-Sample Size Domains. 329-336
- Oleksandr Davydko:
Assessing the Interpretability of the Statistical Radiomic Features via Image Saliency Maps in Medical Image Classification Tasks. 337-344
- Regina De Brito Duarte:
Explainable AI as a Crucial Factor for Improving Human-AI Decision-Making Processes. 345-352
- Renate Ernst:
Counterfactual generating Variational Autoencoder for Anomaly Detection. 353-360
- Fatima Ezzeddine:
Privacy Implications of Explainable AI in Data-Driven Systems. 361-368
- Rokas Gipiskis:
XAI-driven Model Improvements in Interpretable Image Segmentation. 369-376
- Iris Heerlien:
Design Guidelines for XAI in the Healthcare Domain. 377-384
- Annemarie Jutte:
Explainable MLOps: A Methodological Framework for the Development of Explainable AI in Practice. 385-392
- Marija Kopanja:
A Novel Model-Agnostic xAI Method Guided by Cost-Sensitive Tree Models and Argumentative Decision Graphs. 393-400
- Stefanie Krause:
Explainable Artificial Intelligence and Reasoning in the Context of Large Neural Network Models. 401-408
- Lea Louisa Kronziel:
Artificial Representative Trees as Interpretable Surrogates for Random Forests. 409-416
- Pedro M. Marques:
Can Reduction of Bias Decrease the Need for Explainability? Working with Simplified Models to Understand Complexity. 417-424
- Philip Naumann:
Towards XAI for Optimal Transport. 425-432
- Lenka Tetková:
Knowledge Graphs and Explanations for Improving Detection of Diseases in Images of Grains. 433-440
- Victor Toscano-Durán:
Topological Data Analysis for Trustworthy AI. 441-448
- Nils Wenninghoff:
Explainable Deep Reinforcement Learning through Introspective Explanations. 449-456
- Sargam Yadav:
Explainable and Debiased Misogyny Identification In Code-Mixed Hinglish using Artificial Intelligence Models. 457-464