

The 19th International Conference on
Modeling Decisions for Artificial Intelligence

Sant Cugat, Catalonia, Spain
30 August - 2 September, 2022

http://www.mdai.cat/mdai2022
Submission deadline:
DEADLINE EXTENDED: March 25th, 2022

INVITED TALKS


Dr. Clara Granell
Universitat Rovira i Virgili.
Mathematical Modeling of COVID-19 from a Complex Systems Perspective

Abstract: The study of complex systems revolves around analyzing a system through the interactions between its components, rather than focusing on the individual features of each of its parts. This perspective allows us to observe behaviors that would not be easily seen by studying the individual components alone. The human brain, human social networks, biological systems, or transportation networks are examples of complex systems. Another example is the spreading of contagious diseases, where only the study of the whole system allows us to understand what the possible outcome of an epidemic is. In this talk, I will introduce the mathematical models of epidemic spreading we have been developing in the past decade. We will start with simple, compartmental models that allow us to gain a general understanding of how epidemics work. Then we will adapt these simple models to more realistic scenarios that are able to predict the evolution of COVID-19 in Spain.
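
As a concrete illustration of the compartmental models mentioned in the abstract, the following minimal SIR (Susceptible-Infectious-Recovered) sketch in Python integrates the textbook flows (S to I at rate beta*S*I, I to R at rate gamma*I) with a simple Euler step. It is a didactic placeholder, not code from the talk; the parameter values beta and gamma are made up for illustration.

    # Minimal SIR compartmental model with forward Euler integration.
    # beta (transmission rate) and gamma (recovery rate) are illustrative
    # placeholders, not values from the talk.
    def simulate_sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, days=160, dt=0.1):
        s, i, r = s0, i0, 0.0
        history = []
        steps_per_day = round(1 / dt)
        for day in range(days):
            for _ in range(steps_per_day):
                new_infections = beta * s * i * dt   # flow S -> I
                new_recoveries = gamma * i * dt      # flow I -> R
                s -= new_infections
                i += new_infections - new_recoveries
                r += new_recoveries
            history.append((day, s, i, r))
        return history

    if __name__ == "__main__":
        for day, s, i, r in simulate_sir()[::20]:   # print every 20th day
            print(f"day {day:3d}: S={s:.3f} I={i:.3f} R={r:.3f}")

The more realistic scenarios the abstract mentions extend this same state-flow idea, for example with age structure, mobility, or time-varying rates.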


Dr. Anna Monreale
University of Pisa.
Explaining Black Box Classifiers by exploiting Auto-Encoders

Abstract: Artificial Intelligence is nowadays one of the most important scientific and technological areas, with a tremendous socio-economic impact and pervasive adoption in every field of modern society. Many applications in different fields, such as credit score assessment, medical diagnosis, autonomous vehicles, and spam filtering, are based on Artificial Intelligence (AI) decision systems. Unfortunately, these systems often reach their impressive performance through obscure machine learning models that "hide" the logic of their internal decision processes because it is not humanly understandable. For this reason, these models are called black box models, i.e., models used by AI to accomplish a task for which either the logic of the decision process is not accessible, or it is accessible but not human-understandable.
Examples of machine learning black box models adopted by AI systems include Neural Networks, Deep Neural Networks, Ensemble classifiers, and so on.
The lack of interpretability of black box models is a crucial ethical issue and a limitation to AI adoption in socially sensitive and safety-critical contexts such as healthcare and law. As a consequence, research in eXplainable AI (XAI) has recently attracted much attention, and there is an ever-growing interest in providing explanations of the behavior of black box models.
A promising line of research in XAI exploits local explainers, supported by auto-encoders when the black box classifiers to be explained work on non-tabular data (e.g., images, time series, and texts).
The ability of autoencoders to compress data into a low-dimensional tabular representation, and then reconstruct it with negligible loss, provides a great opportunity to work in the latent space to extract meaningful explanations, for example by generating new synthetic samples, consistent with the input data, that can be fed to the black box to understand where its decision boundary lies.
In this presentation we discuss recent XAI solutions based on local explainers and autoencoders that enable the extraction of meaningful explanations composed of factual and counterfactual rules, and of exemplar and counter-exemplar samples, offering a deep understanding of the local decision of the black box.
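
The latent-space procedure described above can be sketched in a few lines of Python. This is a hedged illustration, not the speaker's actual implementation: encoder, decoder, and black_box are hypothetical stand-ins for a trained autoencoder and the classifier to explain, and Gaussian perturbation is just one simple way to sample synthetic neighbours.

    import numpy as np

    # Hedged sketch of latent-space neighbourhood generation for local
    # explanations: encode the instance, perturb it in the latent space,
    # decode the perturbations, and label them with the black box.
    # `encoder`, `decoder`, and `black_box` are hypothetical callables.
    def latent_neighbourhood(instance, encoder, decoder, black_box,
                             n_samples=500, scale=0.5, seed=0):
        rng = np.random.default_rng(seed)
        z = encoder(instance)                        # latent representation
        noise = rng.normal(0.0, scale, size=(n_samples, z.shape[-1]))
        synthetic = np.array([decoder(z + eps) for eps in noise])  # back to input space
        labels = np.array([black_box(x) for x in synthetic])       # query the black box
        target = black_box(instance)
        # Exemplars share the black box's label for the instance;
        # counter-exemplars receive a different label.
        return synthetic[labels == target], synthetic[labels != target]

Rules and counterfactual rules can then be mined from these labeled synthetic samples, since they trace where the black box's local decision boundary lies around the instance.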


Dr. Anna Ginès Fabrellas
Esade, Universitat Ramon Llull
The labor impacts of algorithmic management

Abstract: Although it may seem taken from a science fiction novel, the use of algorithms and artificial intelligence for work management is already a reality. Many companies use these systems to make decisions on the selection of workers, the distribution of tasks, or even dismissals. The use of algorithms and artificial intelligence to adopt automated decisions in people management generates benefits: by automating some decision processes, companies can make organizational decisions quickly and efficiently, thus improving their productivity and competitiveness. In addition, the use of artificial intelligence and algorithms is often presented as an opportunity to adopt mathematically objective decisions based entirely on merit. However, contrary to this aura of objectivity, certainty and precision that surrounds artificial intelligence, the truth is that it presents important challenges and risks for workers' fundamental rights. As the European Parliament maintains in its resolution of March 2017, one of the most relevant risks posed by the use of artificial intelligence and big data today is their impact on workers' fundamental rights to privacy, data protection and non-discrimination. In this sense, the aim of the panel is to analyze the potential risks that algorithmic management poses to workers' fundamental rights, as well as the new legal, technological and ethical challenges it raises.




 

Vicenç Torra, Last modified: 14:02, June 27, 2022.