EXPLAINABILITY: ADVANCES IN BLACK BOX MODELS

10 October 2024 | 10.00h to 10.30h | Room 1

Explainability is a crucial aspect of AI, especially when evaluating results obtained with black box models, which are often difficult to understand and interpret because their internal complexity remains hidden.

To achieve the adoption of AI-based solutions at scale, a relationship of trust must be built with the user. The explainability of AI is a key element in this relationship: it elucidates the decisions made by the models and offers valuable complementary information. It is also crucial to obtaining auditable algorithms.

In this session, we will review the latest developments and examples of techniques that delve into the explainability of AI, with the aim of moving towards more reliable and less opaque use of these models.

Presented by:

  • Nuria Martínez, Communication & Marketing Manager, CVC

Participants:

  • Toni Manzano, CSO and co-founder, Aizon

  • Vasja Urbancic, Lead Data Scientist, Intelygenz, a VASS Company

Speakers

TONI MANZANO

CSO and co-founder, Aizon

Toni is the co-founder and CSO of Aizon, a cloud company that provides an AI SaaS platform for the biotech and pharma industry. He is a member of the PDA Regulatory Affairs and Quality Advisory Board, an active collaborator in the AI initiative for AFDO, a United Nations expert in AI for the life sciences, and teaches AI subjects at university (URV and OBS). He has written numerous articles in the pharma field and holds a dozen international patents related to the encryption, transmission, storage and processing of large volumes of data for regulated cloud environments. Toni is a physicist, holds a Master's in the Information and Knowledge Society, and completed postgraduate studies in quality systems for pharmaceutical manufacturing and research processes.

AI in biopharmaceuticals: when model explainability becomes a requirement

The pharmaceutical industry has always been very conservative, partly because of regulation that demands high levels of quality in the final product. However, implementing AI in industrial processes requires robustness and high availability, and for this reason the cloud presents itself as a powerful way to meet industrial requirements. At the same time, existing regulation does not accept treating AI models as "black boxes", and the whole process must be validated. The use of standards such as ONNX for managing AI models in the cloud has been key to aligning regulatory requirements with the high availability of model outputs in industrial environments.
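The abstract does not detail Aizon's internal tooling, but a minimal sketch can illustrate how the ONNX standard decouples a trained model from the framework it was built in, producing a fixed, portable artifact that can be versioned, validated and served with high availability. The libraries used below (scikit-learn, skl2onnx, onnxruntime) and every name in the example are illustrative assumptions, not part of the speaker's platform.

# Hedged sketch: export a trained model to ONNX and serve it with a
# framework-agnostic runtime. All choices here are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
import onnxruntime as ort

# Train a stand-in model on synthetic data.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Convert to ONNX: the resulting artifact is framework-independent and can be
# validated and deployed as a fixed, auditable object.
onnx_model = convert_sklearn(
    model, initial_types=[("input", FloatTensorType([None, X.shape[1]]))]
)
with open("model.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())

# Inference through onnxruntime, independent of the training framework.
session = ort.InferenceSession("model.onnx")
preds = session.run(None, {"input": X[:5].astype(np.float32)})[0]
print(preds)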

VASJA URBANCIC

Lead Data Scientist, Intelygenz, a VASS Company

Vasja has an academic background in the life sciences, having studied Molecular Biology at the University of Bath and completed his PhD and postdoc in Developmental Neurobiology at the University of Cambridge, studying the developmental processes involved in the construction of biological neural networks. Vasja began his career in data science in 2016, completing the S2DS data science training programme in London (2017) and working in data science at a consulting company (Tessella) from 2018 to 2019. He has been with Intelygenz since 2019, working on a variety of projects spanning natural language processing, computer vision and image generation. For the last 18 months, he has worked for a client in the fintech industry, helping to build a solution for detecting and preventing fraudulent transactions in real time.

Fraud AI: An effective and interpretable solution for fraud detection

We have implemented a highly effective fraud detection solution for a major fintech institution. The technical challenges included covering a high volume of transactions (millions per day, up to 150 per second) at very low latency, so that the client can use the solution for fraud prevention in real time, at the moment of the transaction. Implementing this solution required pre-computing a large number of interpretable features based on previous transactions of the client accounts and merchants involved. It was also very important that the solution not be presented as a black box but offer prediction explainability, i.e. the importance of individual contributing features, not only at the global (dataset-wide) scale but also at the local scale, specific to each transaction. For that reason, we employed a combination of a high-performing gradient-boosted tree algorithm and post-hoc interpretability methods based on SHAP values.
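The abstract does not disclose the client's actual features or model configuration, but a minimal sketch along the lines described (a gradient-boosted tree classifier with post-hoc SHAP explanations) shows how both global, dataset-wide importances and local, per-transaction contributions are obtained. The feature names and synthetic data below are illustrative assumptions, not the deployed system.

# Hedged sketch: gradient-boosted trees plus SHAP for global and local
# explanations. Features and data are illustrative assumptions.
import numpy as np
import pandas as pd
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for pre-computed, interpretable transaction features.
X, y = make_classification(n_samples=2000, n_features=6, weights=[0.97],
                           random_state=0)
features = ["amount", "txn_count_24h", "avg_amount_30d",
            "merchant_risk", "account_age_days", "country_mismatch"]
X = pd.DataFrame(X, columns=features)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global importance: mean absolute SHAP value per feature across the dataset.
global_importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(features, global_importance), key=lambda t: -t[1]):
    print(f"{name:20s} {imp:.4f}")

# Local explanation: each feature's contribution to one specific transaction.
i = 0
print("\nTransaction", i, "- per-feature contributions to the fraud score:")
for name, contrib in zip(features, shap_values[i]):
    print(f"{name:20s} {contrib:+.4f}")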

Presenter

NURIA MARTINEZ

Communication & Marketing Manager, CVC

Nuria Martínez is responsible for communication and marketing at the Computer Vision Centre (CVC). She holds a degree in Journalism and a Master's degree in Corporate and Institutional Communication Management from the Universitat Autònoma de Barcelona. She joined the CVC's communication department in 2017, where she has coordinated several projects on citizen science, dissemination and the social impact of artificial intelligence, such as ExperimentAI and IAèticaBCN. Before joining the CVC, she collaborated with several media outlets, including the magazines Sàpiens and Descobrir.