Ethical and Interpretable
Representations in Bio-Medicine
Aim and Scope
In the bio-medical domain, many complex computational tasks require advanced, novel, and far-from-trivial representations to either mitigate or provide a clearer overview of the problem under investigation. Moreover, the rapid adoption of Machine Learning, Generative, and Deep Learning models in high-risk domains means that representational choices directly influence not only the accuracy of the outcomes but also the ethical accountability and safety of the resulting AI systems.
Although several techniques – ranging from generative and self-adaptive representations to search-space manipulation – can improve the efficiency of optimization methods and the accuracy of machine learning models, they can also hamper transparency, make the decision process more convoluted, obscure algorithmic bias, reduce interpretability, and complicate legal accountability.
The objective of this special session is to bring together technical, ethical, and legal experts in a dedicated forum to discuss the role of representation in Computational Intelligence. We invite researchers who focus on representation as a central mechanism for promoting trustworthy, fair, and interpretable Computational Intelligence. We welcome both technical papers addressing the human and legal dimensions of AI and non-technical papers focusing on the ethical side of leveraging CI in the bio-medical domain.
The Vision
Motivation for the Special Session
"We aim to gather in the same room both computational experts and ethical, legal and humanitarian scholars, fostering a comprehensive discussion with many different points of view."
This Special Session encompasses two of the fastest-growing topics in Computational Intelligence applied to biomedicine: the need for ethical and fair use of interpretable methods, and how to improve the efficacy, interpretability, and fairness of such methods. We believe that this discussion will foster a network that will grow over the next few years, possibly becoming a recurring event at each IEEE CIBCB conference.
Topics of Interest
Session Chairs
Dr. Daniele M. Papetti
University of Milano-Bicocca
IEEE CIS ARBM TF Chair
Daniele M. Papetti is an assistant professor in Computer Science at the University of Milano-Bicocca. He specializes in Artificial Intelligence and global optimization problems. He chairs the IEEE Computational Intelligence Society Task Force on Advanced Representation in Biological and Medical Search and Optimization, a topic that was the focus of his Ph.D. thesis and research. His current expertise centers on applying Artificial Intelligence and machine learning in healthcare to develop clinical decision support tools.
Matteo Grazioso
Corresponding Special Session Organizer · Tutorial Organizer · Primary Contact
Ca’ Foscari University of Venice
IEEE CIS ARBM TF Member
Matteo Grazioso is a Ph.D. student in Computer Science at Ca’ Foscari University of Venice (Department of Environmental Sciences, Informatics and Statistics). His primary research focuses on interpretable and fairness-preserving AI for high-risk applications, where he explores the design of trustworthy AI systems, ensuring that Machine Learning models in high-stakes domains are both transparent and ethically aligned. Matteo’s research interests lie at the intersection of Computational Intelligence, Machine Learning, Interpretable AI, High-Performance Computing, Evolutionary Drug Discovery, and Optimization Algorithms. He holds a Master’s Degree in Computer Science – Artificial Intelligence and Data Engineering – from Ca’ Foscari University of Venice, graduating summa cum laude; he also served there as a Research Grant Holder. In parallel with his doctoral studies, Matteo serves as a Scientific Associate at the Italian National Institute of Nuclear Physics (INFN), Milano-Bicocca Division, and is a visiting Ph.D. student at the University of Milano-Bicocca (Department of Informatics, Systems and Communication). He is an active member of the IEEE and the IEEE Computational Intelligence Society (CIS), contributing to the CIS Task Force on Advanced Representation in Biological and Medical Search and Optimization.
Prof. Tayo Obafemi-Ajayi
Missouri State University
IEEE SHIELD TC Chair
Tayo Obafemi-Ajayi is an associate professor of Electrical Engineering (Guy Mace Professor of Engineering) in the Engineering Program at Missouri State University (MSU). Her research focuses on developing explainable and ethical machine learning/AI algorithms for broad utility in biomedical applications, including health informatics, deep learning, and multi-modal data analysis. She received the MSU Atwood Excellence in Research and Teaching award in 2024 and the Board of Governors' Faculty Excellence award in 2025. She served as a Technical Representative on the Administrative Committee of the IEEE Engineering in Medicine and Biology Society (EMBS) from 2023 to 2025, and as chair of the IEEE CIS Bioengineering and Bioinformatics Technical Committee from 2023 to 2024.
Prof. John W. Sheppard
Montana State University
IEEE SHIELD TC Member
John W. Sheppard holds a BS in computer science from Southern Methodist University and an MS and PhD in computer science from Johns Hopkins University. In 2007, he was elected an IEEE Fellow "for contributions to system-level diagnosis and prognosis." Prior to entering academia, he was a Fellow at ARINC Incorporated, a defense aerospace company in Annapolis, MD, where he worked for 20 years. Dr. Sheppard performs research in probabilistic graphical models, deep learning, evolutionary and swarm-based algorithms, distributed optimization, and applications to system-level test, diagnosis, and predictive health. Recently, his research has expanded into prostate cancer diagnosis, precision agriculture, and wildfire management. He has published over 200 papers in peer-reviewed conference proceedings and journals, as well as three books on system-level diagnosis and systems engineering. In addition, Dr. Sheppard is active in IEEE Standards activities: he currently chairs the IEEE P2848 Prognostics and Health Management for Automatic Test Systems standards development working group under SCC20 and the IEEE P1232 Standard for System Diagnostic Data and Services working group. He is also a member of the IEEE P2976 eXplainable AI standards working group.
Dr. Chiara Gallese
Tutorial Organizer
Tilburg University, The Netherlands
BBTC & SHIELD Technical Committee Member
Silvia Multari
Tutorial Presenter
Ca’ Foscari University of Venice
IEEE Computational Intelligence Society Member
Silvia Multari is a Ph.D. candidate in Science and Technology of Bio and Nano Materials at Ca’ Foscari University of Venice, Department of Molecular Sciences and Nanosystems. She holds a bachelor’s degree in Pharmaceutical Biotechnology from the University of Milan and is now specialising in computational applications to drug discovery, with a focus on the study of molecular interactions, combining physics-based and AI-driven approaches to investigate peptide and small-molecule binding. She was a visiting student at the Technical University of Eindhoven (Eindhoven, the Netherlands) in the Molecular Machine Learning group led by Professor Francesca Grisoni, and then at the Kyoto Institute of Technology (Kyoto, Japan) under the supervision of Professor Giuseppe Pezzotti. Her main expertise lies in the application of molecular dynamics simulations, molecular docking, and machine learning.
Assessment of biomedical datasets
fairness using FanFAIR: a tutorial
Research has shown that datasets can embed social bias into AI systems, especially those based on machine learning. A biased dataset is not representative of reality and perpetuates societal biases through the model.
The core of this tutorial is the FanFAIR approach – a rule-based framework leveraging Fuzzy Logic to calculate multiple fairness metrics and condense them into a single, interpretable score. We specifically explore these concepts within the high-stakes domain of Bio-Medicine and Drug Discovery.
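To give a flavour of the fuzzy-aggregation idea, the sketch below is a minimal, hypothetical illustration and not the FanFAIR API: the `balance`, `completeness`, and `fuzzy_and` helpers are our own names, and FanFAIR itself computes richer metrics via fuzzy rules. It maps two simple dataset signals to membership degrees in [0, 1] and condenses them with a conservative t-norm into a single score:

```python
# Illustrative sketch (not the FanFAIR library): fuzzy-style aggregation of
# simple dataset signals into one interpretable fairness-oriented score.

def balance(labels):
    """Degree of class balance in [0, 1]; 1.0 means perfectly balanced."""
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    freqs = [c / len(labels) for c in counts.values()]
    return min(freqs) / max(freqs)

def completeness(rows):
    """Fraction of non-missing cells in [0, 1]; None marks a missing value."""
    total = sum(len(r) for r in rows)
    present = sum(1 for r in rows for v in r if v is not None)
    return present / total

def fuzzy_and(memberships):
    """Minimum t-norm: the weakest criterion dominates the overall score."""
    return min(memberships)

labels = [0, 0, 0, 1, 1, 1]                      # perfectly balanced classes
rows = [[1.0, 2.0], [3.0, None], [5.0, 6.0],
        [7.0, 8.0], [None, 1.0], [2.0, 3.0]]     # 2 of 12 cells missing
score = fuzzy_and([balance(labels), completeness(rows)])
print(round(score, 3))  # → 0.833
```

The minimum t-norm is a deliberately strict choice: one badly failing criterion drags the whole score down, which matches the intuition that a dataset is only as trustworthy as its weakest fairness signal.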
Learning Objectives
Bridging Law and CI: Deconstruct the EU AI Act and global regulations into actionable CI constraints.
Dataset Auditing: Implement a complete data fairness audit using the FanFAIR Python library.
Detecting Representation Bias: Identify and mitigate 'hidden' biases that compromise reliability.
Tutorial Schedule
Module 1 • 30 MIN
From Law to Logic
Foundations of Fair Datasets
Module 2 • 45 MIN
Hands-on Lab
FanFAIR Python Library
Module 3 • 15 MIN
Bio-Molecular Representation
Identifying Domain Risks
Intended Audience
Researchers, Ph.D. students, and practitioners in CI, as well as scholars from legal and ethical sectors.
Tutorial Presenters
Dr. Chiara Gallese
Tilburg University
Matteo Grazioso
Ca’ Foscari University of Venice
Silvia Multari
Ca’ Foscari University of Venice