EWAF 2023: Winterthur, Switzerland
- José M. Álvarez, Alessandro Fabris, Christoph Heitz, Corinna Hertweck, Michele Loi, Meike Zehlike: Proceedings of the 2nd European Workshop on Algorithmic Fairness, Winterthur, Switzerland, June 7th to 9th, 2023. CEUR Workshop Proceedings 3442, CEUR-WS.org 2023.
- Preface.
Computer Science Track
- Andreas Nikolaos Athanasopoulos, Amanda Belfrage, David Berg Marklund, Christos Dimitrakakis: Approximate Inference for the Bayesian Fairness Framework.
- Joachim Baumann, Alessandro Castelnovo, Riccardo Crupi, Nicole Inverardi, Daniele Regoli: An Open-Source Toolkit to Generate Biased Datasets.
- Joachim Baumann, Anikó Hannák, Christoph Heitz: Fair Machine Learning Through Post-processing: The Case of Predictive Parity.
- Giorgian Borca-Tasciuc, Xingzhi Guo, Stanley Bak, Steven Skiena: Provable Fairness for Neural Network Models Using Formal Verification.
- Adrian Byrne, Ivan Caffrey, Quan Le: Towards a Framework for the Global Assessment of Sensitive Attribute Bias Within Binary Classification Algorithms.
- Mattia Cerrato, Alesia Vallenas Coronel, Marius Köppel: The Case for Correctability in Fair Machine Learning.
- Alessandro Fabris, Fabio Giachelle, Alberto Piva, Gianmaria Silvello, Gian Antonio Susto: A Search Engine for Algorithmic Fairness Datasets.
- Siamak Ghodsi, Eirini Ntoutsi: Affinity Clustering Framework for Data Debiasing Using Pairwise Distribution Discrepancy.
- Sofie Goethals, David Martens, Toon Calders: Explainability Methods to Detect and Measure Discrimination in Machine Learning Models.
- Sakina Hansen, Joshua R. Loftus: Model-Agnostic Auditing: A Lost Cause?
- Corinna Hertweck, Joachim Baumann, Michele Loi, Christoph Heitz: FairnessLab: A Consequence-Sensitive Bias Audit and Mitigation Toolkit.
- Fanny Jourdan, Ronan Pons, Nicholas Asher, Jean-Michel Loubes, Laurent Risser: Is a Fairness Metric Score Enough to Assess Discrimination Biases in Machine Learning?
- Fanny Jourdan, Titon Tshiongo Kaninku, Nicholas Asher, Jean-Michel Loubes, Laurent Risser: Breaking Bias: How Optimal Transport Can Help to Tackle Gender Biases in NLP Based Job Recommendation Systems?
- Bogdan Kulynych, Hsiang Hsu, Carmela Troncoso, Flávio P. Calmon: Arbitrary Decisions Are a Hidden Cost of Differentially Private Training.
- Joshua R. Loftus: It's About Time: Counterfactual Fairness and Temporal Depth.
- Alex Loosley, Amrollah Seifoddini, Alessandro Canopoli, Meike Zehlike: Body Measurement Prediction Fairness.
- Nicolò Pagan, Joachim Baumann, Ezzat Elokda, Giulia De Pasquale, Saverio Bolognani, Anikó Hannák: Closing the Loop: Feedback Loops and Biases in Automated Decision-Making.
- Evaggelia Pitoura: Pagerank Fairness in Networks.
- Lorenzo Porcaro, Carlos Castillo, Emilia Gómez, João Vinagre: Fairness and Diversity in Information Access Systems.
- Samuel Teuber, Bernhard Beckert: Formally Verified Algorithmic Fairness Using Information-Flow Tools.
- Charles Wan, Leid Zejnilovic, Susana Lavado: How Differential Robustness Creates Disparate Impact: A European Case Study.
Philosophy Track
- Joachim Baumann, Corinna Hertweck, Michele Loi, Christoph Heitz: Unification, Extension, and Interpretation of Group Fairness Metrics for ML-Based Decision-Making.
- Sander Beckers, Hana Chockler, Joseph Y. Halpern: A Causal Analysis of Harm.
- Lou Therese Brandner, Philipp Mahlow, Anna Wilken, Annika Wölke, Hazar Harmouch, Simon David Hirsbrunner: How Data Quality Determines AI Fairness: The Case of Automated Interviewing.
- Marcello Di Bello, Nicolò Cangiotti, Michele Loi: Classification Parity, Causal Equal Protection and Algorithmic Fairness.
- Matteo Fabbri: Social Influence for Societal Interest: A Pro-Ethical Framework for Improving Human Decision-Making Through Multi-Stakeholder Recommender Systems.
- Andrea Ferrario: Through the Sands of Time: A Reliabilistic Account of Justified Credence in the Trustworthiness of AI Systems.
- Camilla Quaresmini, Eugenia Villa, Valentina Breschi, Viola Schiaffonati, Mara Tanelli: Qualification and Quantification of Fairness for Sustainable Mobility Policies.
- Otto Sahlgren: Using Fairness Metrics as Decision-Making Procedures: Algorithmic Fairness and the Problem of Action-Guidance.
- Teresa Scantamburlo, Giovanni Grandi: A 'Little Ethics' for Algorithmic Decision-Making.
- Vincent J. Straub, Deborah Morgan, Youmna Hashem, John Francis, Saba Esnaashari, Jonathan Bright: A Multidomain Relational Framework to Guide Institutional AI Research and Adoption.
- Bauke Wielinga: Complex Equality and Algorithmic Fairness: A Social Goods Approach to Make Statistical Fairness Metrics Less Abstract.
- Sebastian Zezulka: Fairness After Intervention: Towards a Theory of Substantial Fairness for Machine Learning.
Social Sciences Track
- Sofia Jaime, Christoph Kern: Ethnic Classifications in Algorithmic Decision-Making Processes.
- Christoph Kern, Ruben L. Bach, Hannah Mautner, Frauke Kreuter: When Small Decisions Have Big Impact: Fairness Implications of Algorithmic Profiling Schemes.
- Oriane Pierrès, Alireza Darvishy, Markus Christen: Artificial Intelligence in Higher Education: Ethical Concerns for Students With Disabilities.
- Jan Simson, Florian Pfisterer, Christoph Kern: What If? Using Multiverse Analysis to Evaluate the Influence of Model Design Decisions on Algorithmic Fairness.
- Laura State, Miriam Fahimi: Careful Explanations: A Feminist Perspective on XAI.
- Chiara Ullstein, Severin Engelmann, Orestis Papakyriakopoulos, Jens Grossklags: A Reflection on How Cross-Cultural Perspectives on the Ethics of Facial Analysis AI Can Inform EU Policymaking.
Law & Policy Track
- Ahmet Bilal Aytekin: Algorithmic Bias in the Context of European Union Anti-Discrimination Directives.
- Eugenia Cacciatori, Enzo Fenoglio, Emre Kazim: Living with Opaque Technologies: Insights for AI from Digital Simulations.
- Gabriele Carovano, Alexander Meinke: Improving Fairness and Cybersecurity in the Artificial Intelligence Act.
- Matteo Fabbri: From Digital Nudging to Users' Self-Determination: Explainability as a Framework for the Effective Implementation of the Transparency Requirements for Recommender Systems Set by the Digital Services Act of the European Union.
- Lukas Hondrich, Hannah Ruschemeier: Addressing Automation Bias through Verifiability.
- Jan-Laurin Müller: Fairness in Machine Learning as 'Algorithmic Positive Action'.
- Carlotta Rigotti, Alexandre R. Puttick, Eduard Fosch-Villaronga, Mascha Kurpicz-Briki: Mitigating Diversity Biases of AI in the Labor Market.
- Nicolas Scharowski, Michaela Benk, Swen J. Kühne, Léane Wettstein, Florian Brühlmann: Certification Labels for Trustworthy AI.
- Laura State, Alejandra Bringas Colmenarejo, Andrea Beretta, Salvatore Ruggieri, Franco Turini, Stephanie Law: The Explanation Dialogues: Understanding How Legal Experts Reason About XAI Methods.
- Hilde J. P. Weerts, Raphaële Xenidis, Fabien Tarissan, Henrik Palmer Olsen, Mykola Pechenizkiy: Algorithmic Unfairness Through the Lens of EU Non-Discrimination Law.
- Malwina Anna Wojcik: Assessing the Legality of Using the Category of Race and Ethnicity in Clinical Algorithms - the EU Anti-Discrimination Law Perspective.
- Yasaman Yousefi, Lisa Koutsoviti Koumeri, Magali Legast, Christoph Schommer, Koen Vanhoof, Axel Legay: Compatibility of Fairness Metrics With EU Non-Discrimination Law: A Legal and Technical Case Study.