PFIA Invited Speakers

Plate-Forme Intelligence Artificielle

June 30 - July 4, 2025, Dijon, France

Invited speakers at PFIA


Vaishak Belle

University of Edinburgh

Journées d'Intelligence Artificielle Fondamentale (JIAF)

Title: Neuro-symbolic Systems for Responsible AI: Challenges and Opportunities



Abstract: Machine learning (ML) techniques have become pervasive across a range of different applications, and are now widely used in areas as disparate as recidivism prediction, consumer credit-risk analysis, and insurance pricing. Likewise, in the physical world, ML models are critical components in autonomous agents such as robotic surgeons and self-driving cars. Among the many dimensions that arise in the use of ML technology in such applications, analysing properties such as correctness and fairness is both immediate and profound. In this talk, we advocate for a two-pronged approach to ethical/responsible decision-making enabled by rich models of autonomous agency: on the one hand, we need to draw on philosophical notions such as beliefs, causes, effects, and intentions, and on the other, we consider the problem of practical and effective knowledge acquisition and learning.

Biography: Dr Vaishak Belle (he/him) is a Reader at the University of Edinburgh, an Alan Turing Fellow, and a Royal Society University Research Fellow. He has made a career out of research on the science and technology of AI. He has published close to 120 peer-reviewed articles, won best paper awards, and consulted with banks on explainability. As PI and Co-I, he has secured grant income of close to 8 million pounds.

Thomas Fel

PhD Thesis Award (Prix de thèse)

Title: Glimmers of Explainability: Recent Advances in Explaining Deep Neural Networks for Vision

Abstract: The thesis proposes new theoretical and algorithmic foundations for explainability in computer vision. It establishes the intrinsic limits of local attribution methods (saliency maps) in complex settings and introduces alternative approaches based on global sensitivity measures (Sobol indices) as well as methods with formal guarantees (EVA). The thesis also explores aligning the internal representations of neural networks with forms of human reasoning, either through supervision on human annotations (Harmonization) or through functional constraints (e.g., 1-Lipschitz functions). Breaking with the classical spatial analysis of attribution, the final part of the thesis proposes a paradigm shift towards identifying the conceptual entities perceived by models. This framework rests on a reformulation of the concept-extraction problem as a dictionary learning problem over neural representation spaces, enabling a sparse and interpretable factorization of internal activations. The whole culminates in a unified system for the visualization and interactive analysis of latent concepts learned by architectures such as ResNet50.
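
As a rough illustration of the "concept extraction as dictionary learning" framing, here is a minimal sketch using scikit-learn. The activation matrix, layer choice, and hyperparameters are placeholder assumptions for illustration, not the algorithm developed in the thesis.

# Hypothetical sketch: concept extraction as dictionary learning on activations.
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Assume A holds internal activations, one row per input, e.g. globally pooled
# feature maps from a late ResNet50 layer (random data here for illustration).
rng = np.random.default_rng(0)
A = np.abs(rng.normal(size=(200, 64)))

# Learn a dictionary D of k candidate "concept" directions and sparse codes U,
# such that A ≈ U @ D with each input expressed by only a few concepts.
dl = DictionaryLearning(n_components=10, alpha=1.0, random_state=0)
U = dl.fit_transform(A)   # sparse codes: which concepts fire for each input
D = dl.components_        # dictionary atoms: candidate concept directions
print(U.shape, D.shape)   # (200, 10), (10, 64)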

Biography: Thomas Fel is a researcher at the Kempner Institute at Harvard University, where he works on the explainability of large vision models. His research aims to use explainability to better understand the mechanisms of intelligence, combining computational science, mathematics, and neuroscience. He completed his PhD between the University of Toulouse (ANITI, DEEL group) and Brown University, in Thomas Serre's laboratory, with the support of the SNCF. After his thesis, he worked at Google alongside Katherine Hermann on problems of interpretability and shortcut learning.

Rémi Flamary

École Polytechnique

Conférence sur l'Apprentissage Automatique (CAP)

Title: Adaptation to data shift without labels: methods, benchmarks, and test-time adaptation with optimal transport on biomedical signals

Abstract: Domain Adaptation (DA) is a fundamental Machine Learning challenge that aims at adapting supervised predictors to new data in the absence of labels on the new data. We will first present the problem and the classical approaches that have been proposed to tackle data shift. Then we will introduce a multimodal benchmark of those methods and discuss the results, highlighting the limits of the current approaches in practice. Finally, we will focus on a specific modality, electroencephalograms (EEGs), and the problem of sleep stage classification, and present an approach based on optimal transport that makes it possible to model and exploit the specificities of individual subjects for better personalized predictions.
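
As a small, hedged taste of the optimal-transport machinery involved, the sketch below adapts labelled source data to an unlabelled, shifted target with the POT toolbox (mentioned in the biography below). The synthetic data and regularization value are illustrative assumptions; this is not the talk's EEG pipeline.

# Minimal domain-adaptation sketch with the Python Optimal Transport toolbox.
import numpy as np
import ot

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(100, 5))   # labelled source features
ys = (Xs[:, 0] > 0).astype(int)            # source labels
Xt = rng.normal(0.5, 1.2, size=(120, 5))   # unlabelled, shifted target

# Entropy-regularized OT plan mapping source samples onto the target.
mapper = ot.da.SinkhornTransport(reg_e=1.0)
mapper.fit(Xs=Xs, ys=ys, Xt=Xt)
Xs_mapped = mapper.transform(Xs=Xs)        # source moved into the target domain

# A classifier trained on (Xs_mapped, ys) can then be applied to Xt.
print(Xs_mapped.shape)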

Biography: Rémi Flamary is Professor at École Polytechnique in the Centre de Mathématiques Appliquées (CMAP). He was previously Associate Professor at Université Côte d'Azur (UCA) and a member of the Lagrange Laboratory, Observatoire de la Côte d'Azur. His current research interests include signal and image processing and machine learning, with a recent focus on applications of Optimal Transport theory to machine learning problems such as graph processing and domain adaptation. He is also the co-creator and maintainer of the Python Optimal Transport toolbox (POT).

Luis Galárraga

INRIA/IRISA

Rencontres des Jeunes Chercheurs en Intelligence Artificielle (RJCIA)

Title: Data- and Human-aware Explainable AI

Abstract: The ubiquity of AI-supported technologies in today's society, along with their potential risks, has been the main driver behind the growing body of research in eXplainable AI (xAI). On the one hand, the machine learning and AI communities have addressed this challenge by proposing a wide range of methods aimed at opening the black box of AI. However, most of these efforts have focused on the data and functional aspects of explainability, often overlooking the fact that the ultimate recipients of these explanations are humans, whose perceptions of algorithmic outcomes have rarely been evaluated. On the other side of the spectrum, the HCI and human-centered AI communities have extensively studied the cognitive aspects of providing explanations for AI-based recommendations. This has typically involved the execution of in-depth user studies that assess the effectiveness of every design decision within the explanation pipeline. In this talk, I will first provide a high-level overview of work conducted by both the data-centric and human-aware xAI sub-communities. I will then discuss the limitations of approaching the problem from a single perspective. Finally, I will offer insights into how future research in explainable AI can be both data- and human-aware, ensuring it fulfills its original purpose: making AI algorithms transparent and understandable for us.

Biography: Luis Galárraga is a full-time researcher at the IRISA/Inria Rennes research center, which he joined in 2017. His research lies at the crossroads of three axes: pattern mining, knowledge graphs, and eXplainable AI. He has a particular interest in rule mining approaches on large knowledge graphs, as well as in provenance management on RDF graphs. His work on eXplainable AI focuses on both the functional and human dimensions of explanations for black-box models trained on tabular data, time series, knowledge graphs, and textual corpora. The ultimate goal of his research is to compute explanations that are faithful and stable, but also trustworthy and fully understandable to their human recipients. He is one of the founders of the AIMLAI workshop on Advances in Interpretable Machine Learning and AI and has organized it regularly since 2019.

Christian Hennig

University of Bologna

Rencontres de la Société Francophone de Classification (SFC)

Title: On decision making in cluster analysis



Abstract: There are many different approaches to cluster analysis, and when applied to the same data, different methods will often produce quite different clusterings. Data analysts not only have to choose a clustering method; pre- and post-processing decisions also need to be made, such as the selection and transformation of features and the number of clusters.

Making all the required decisions is very difficult. As there is no unique definition of the clustering problem, neither is there a unique or "optimal" way to measure the quality of a clustering, and the data alone do not hold all the information required to make these decisions. This poses a major challenge for automating cluster analysis, for machine learning in particular.

I will discuss some of the required decisions and quality criteria, illustrating the problems with automated decision making and showing how background knowledge and techniques such as data visualisation can help.
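
To make one such decision concrete, here is a small sketch (an illustration, not the speaker's method) that automates the choice of the number of clusters with a single quality index, the average silhouette width, using scikit-learn. As the talk argues, no single criterion is "optimal", so this is just one heuristic among many.

# Illustrative sketch: choosing the number of clusters with one quality index.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.5, size=(50, 2)) for c in (0, 3, 6)])  # 3 blobs

scores = {}
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)  # other indices may disagree

best_k = max(scores, key=scores.get)
print(scores, "-> chosen k:", best_k)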

Biography: Christian Hennig is Full Professor at the Department of Statistical Sciences "Paolo Fortunati", University of Bologna, Italy. Before that, he was Senior Lecturer at University College London and Lecturer at the University of Hamburg and ETH Zürich.

He has provided statistical consulting for more than 100 clients. He is the author of the popular fpc and prabclus R packages. He is the first editor of the Handbook of Cluster Analysis and has written key references on robustness in cluster analysis, cluster stability, and choosing the number of clusters. Other work concerns the philosophical foundations of statistical modelling, objectivity and subjectivity in statistics, identifiability, and the role of model assumptions.

Simon Lucas

Queen Mary, University of London

Conférence Nationale en Intelligence Artificielle (CNIA)

Title: Simulation-Based AI with LLMs



Abstract: Despite amazing progress in generative AI, even the largest and smartest large language models have serious and insurmountable limitations in their reasoning abilities.

On the other hand, simulation-based AI (SBAI) agents make intelligent decisions based on the statistics of simulations using a forward model of a problem domain, providing a complementary type of intelligence. SBAI algorithms have very attractive properties, including instant adaptation to new problems, tunable intelligence, and some degree of explainability.

In this talk I outline some of the main algorithms in the area, discuss how combining them with the best features of LLMs can lead to new types of rapidly adaptive intelligent agents, and demonstrate results on some interesting problems.
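
A minimal sketch of this simulation-based decision loop, on a toy domain invented purely for illustration: each candidate action is scored by the average return of random rollouts through a forward model, and the best-scoring action is played.

# Toy simulation-based AI agent: decide from the statistics of rollouts.
import random

TARGET = 10  # toy goal: reach this state

def forward_model(state, action):
    # Forward model of the domain: next state for a given action.
    return state + action

def rollout(state, depth=5):
    # Simulate random play from state; score is closeness to TARGET.
    for _ in range(depth):
        state = forward_model(state, random.choice([-1, 1, 2]))
    return -abs(TARGET - state)

def choose_action(state, actions=(-1, 1, 2), n_sims=100):
    # Average rollout score per candidate action; pick the best.
    def value(a):
        return sum(rollout(forward_model(state, a)) for _ in range(n_sims)) / n_sims
    return max(actions, key=value)

state = 0
for _ in range(8):
    state = forward_model(state, choose_action(state))
print("final state:", state)  # should land near TARGET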

Biography: Simon Lucas is a full professor of AI in the School of Electronic Engineering and Computer Science at Queen Mary University of London, where he leads the Game AI Research Group. He was previously Head of the School of EECS at QMUL. He recently spent two years as a research scientist / software engineer in the Simulation-Based Testing team at Meta, applying simulation-based AI to automated testing.

Simon was the founding Editor-in-Chief of the IEEE Transactions on Games, co-founded the IEEE Conference on Games, served as VP-Education for the IEEE Computational Intelligence Society, and has held many conference chair roles. His research is focused on simulation-based AI (e.g. Monte Carlo Tree Search, Rolling Horizon Evolution), bandit-based optimisation, and LLMs.

Nardine Z Osman

Artificial Intelligence Research Institute (IIIA-CSIC)

Journées Francophones sur les Systèmes Multi-Agents (JFSMA)

Title: Value Engineering

Abstract: One of today's most pressing societal challenges is the development of AI systems whose behaviour, or the behaviour they enable within communities of interacting agents (both human and artificial), aligns with human values. This requires developing AI systems that can identify, understand, and reason about human values, along with explaining their behaviour in terms of those values. The "value engineering" research area addresses this challenge. This talk will introduce the value engineering challenge and present some of the advances in this research line, including a framework for modelling human values and mechanisms for enabling value awareness in socio-technical systems. We showcase the impact of this work through some real-life applications.

Biography: Nardine Osman is a senior researcher at the Artificial Intelligence Research Institute (IIIA), part of the Spanish National Research Council (CSIC). She earned her PhD in Informatics from the University of Edinburgh, UK, in 2008. Her research expertise covers multiagent systems, normative systems, trust and reputation, and formal verification. Leveraging this expertise, she has recently been focusing on value engineering in AI, a field aimed at developing systems that can identify and understand human values, reason according to those values, and explain behaviour in terms of them. Nardine currently leads the Ethics and AI research line at IIIA-CSIC and is actively involved in major projects. She heads the €1.8M Spanish project VAE (Value-Awareness Engineering) and contributes to the EU-funded VALAWAI project on value awareness in AI. Additionally, she recently co-founded the International Workshop on Value Engineering in AI with Luc Steels, which is now in its third edition at ECAI 2025.

Louis-Martin Rousseau

Polytechnique Montréal, Canada

Journées Francophones de Programmation par Contraintes (JFPC)

Title: Towards neuro-symbolic solvers in constraint programming



Abstract: This talk examines how machine learning can improve constraint programming (CP), highlighting the integration of learning techniques to optimize search and propagation in CP solvers. Building on recent advances in mathematical programming, it demonstrates how learning can strengthen value- and variable-selection heuristics, notably through reinforcement learning and graph neural networks. Particular attention is given to improving dual bounds via Lagrangian relaxation and decomposition, illustrating how learning can accelerate these methods while increasing their effectiveness. Finally, the talk stresses the importance of a hybrid approach combining artificial intelligence and logical reasoning, in order to give solvers greater capabilities and to make the resolution of combinatorial problems more efficient and adaptable.
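
To show where learning could plug in, here is a toy backtracking solver (an illustration, not the speaker's system) in which the variable- and value-ordering heuristics are pluggable functions; a neuro-symbolic solver would replace the hand-written heuristics below with learned ones, e.g. trained by reinforcement learning or a graph neural network.

# Toy CP-style backtracking search with pluggable ordering heuristics.
def solve(domains, constraints, select_var, order_values, assignment=None):
    # Backtracking over {var: [values]} with constraints on partial assignments.
    assignment = assignment or {}
    if len(assignment) == len(domains):
        return assignment
    var = select_var(domains, assignment)                # variable heuristic
    for val in order_values(var, domains, assignment):   # value heuristic
        assignment[var] = val
        if all(c(assignment) for c in constraints):
            result = solve(domains, constraints, select_var, order_values, assignment)
            if result:
                return result
        del assignment[var]
    return None

# Hand-written baselines; a learned heuristic would replace these.
def smallest_domain(domains, assignment):
    return min((v for v in domains if v not in assignment),
               key=lambda v: len(domains[v]))

def natural_order(var, domains, assignment):
    return domains[var]

# Toy problem: three variables that must all take different values.
domains = {"x": [1, 2, 3], "y": [1, 2, 3], "z": [1, 2, 3]}
def all_diff(a):
    vals = list(a.values())
    return len(vals) == len(set(vals))

print(solve(domains, [all_diff], smallest_domain, natural_order))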

Biography: Louis-Martin Rousseau has been a professor in the Department of Mathematics and Industrial Engineering at Polytechnique Montréal for more than 20 years. A specialist in artificial intelligence, operations research, and management science, he is internationally recognized for his contributions to combinatorial optimization problems, notably in column generation, transportation logistics, scheduling, and healthcare resource optimization. Since 2016 he has held the Canada Research Chair in Healthcare Logistics (HANALOG), through which he conducts research aimed at improving the planning and efficiency of hospital services.

Marieke van Erp

KNAW Humanities Cluster

Ingénierie des Connaissances (IC)

Title: Layering Knowledge to Unpack the Layers of Meaning in Historical Texts



Abstract: Historical texts present computational analyses with many different challenges: digitisation artefacts, segmentation, language evolution, and changing societal values. In this talk, I will present various interdisciplinary projects that my team has worked on and is working on to address these challenges. Our use cases range from government and company records to literature, letters, newspapers, and cookbooks, all spanning centuries. The approaches we use depend on the best tool for the job: rules, machine learning, prompt engineering, all informed by domain expertise, leading to applications that are used by researchers and the general public to make sense of big historical data.

Biography: Marieke van Erp is a Language Technology and Semantic Web expert engaged in interdisciplinary research. She holds a PhD in computational linguistics from Tilburg University and has worked on many (inter)national interdisciplinary projects. Since 2017, she has been leading the Digital Humanities Research Lab at the Royal Netherlands Academy of Arts and Sciences Humanities Cluster. She is one of the founders and scientific directors of the Cultural AI Lab, a collaboration between 8 research and cultural heritage institutions in the Netherlands aimed at the study, design, and development of socio-technological AI systems that are aware of the subtle and subjective complexity of human culture. In January 2023, she was awarded an ERC Consolidator project that will investigate how language and semantic web technologies can improve the creation of knowledge graphs supporting humanities research.

Chi Wang

Google DeepMind

Conférence Nationale sur les Applications Pratiques de l'Intelligence Artificielle (APIA)

Title: AG2: Open-Source AgentOS for Agentic AI



Abstract: This presentation will address the future landscape of AI applications and the ways in which we can enable every developer to create them. It will examine the trend of agentic AI and the fundamental design considerations for agentic AI operating systems. It will then explore a pioneering initiative, AG2, outlining its primary concepts and its application across a diverse range of tasks and industries, where it has achieved top rankings on challenging benchmarks and driven research advances. The talk will conclude with open questions.
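
For readers who want to try it, here is a minimal two-agent sketch in the classic AutoGen-style API that AG2 inherited (pip install ag2). Class names and configuration fields may differ across versions, and the model name and API key are placeholders, so treat this as illustrative only.

# Hedged sketch of a two-agent AG2 conversation (API may vary by version).
from autogen import AssistantAgent, UserProxyAgent

llm_config = {"config_list": [{"model": "gpt-4o", "api_key": "YOUR_API_KEY"}]}

assistant = AssistantAgent(name="assistant", llm_config=llm_config)
user_proxy = UserProxyAgent(
    name="user",
    human_input_mode="NEVER",      # fully automated for this sketch
    code_execution_config=False,   # disable local code execution
)

# The user proxy drives the chat; the assistant replies through the LLM.
user_proxy.initiate_chat(assistant, message="Summarise what an AgentOS is.",
                         max_turns=2)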

Biography: Chi is the founder of AG2 (formerly known as AutoGen), the open-source AgentOS for agentic AI, and of its parent open-source project FLAML, a fast library for AutoML & tuning. He has received multiple awards, including the best paper award at the ICLR'24 LLM Agents Workshop, Open100, and the SIGKDD Data Science/Data Mining PhD Dissertation Award. Chi runs the AG2 community of 20K+ members. He has 15+ years of research experience in Computer Science and work experience at Google DeepMind, Microsoft Research, and Meta.