Explainability and trust, keys to scaling AI
28 September 2023 | 11.45h to 12.45h
To scale the adoption of AI-based solutions, we must build trust with users. The explainability of AI is key to this relationship, since it can elucidate the decisions made by models and offer additional valuable information. It is also crucial for obtaining auditable algorithms.
In this session, we will look at the latest developments and examples of techniques that further the explainability of AI, aimed at moving towards more reliable and less opaque use.
Presents:
-
Gemma Batlle, Business Development Manager, Eurecat
Participants:
-
Ricardo Moya, Technological Specialist in Artificial Intelligence & Big Data at Telefónica I+D
-
Karina Gibert, Professor, Director of IDEAI and Dean of COEINF
-
Joan Vidal, Director of Risk Analytics at CaixaBank
-
Javier Matamoros, AI Architect at CaixaBank Tech
Speakers
RICARDO MOYA
Technological Specialist in Artificial Intelligence & Big Data at Telefónica I+D
PhD in Computer Science and Technology and Computer Engineer from the UPM, specialising in Artificial Intelligence. He currently works at Telefónica I+D as a Technological Specialist in AI and Big Data, where he designs and leads the development of Artificial Intelligence solutions. He also teaches AI at the master's and undergraduate levels. He has worked as a researcher on projects involving recommender systems, deep learning and machine learning, and as a Project Director at DEVO, where he led real-time Big Data initiatives. He is the co-founder of the Artificial Intelligence professional association “AI-Network” and of the website jarroba.com, where he produces technical content about AI and Computer Science.
Local Interpretable Data Explanations Method with XAIoGraphs
Telefónica published its five AI principles in October 2018, one of which is “transparency and explainability”: ensuring that AI decisions provide an acceptable level of comprehension and explainability. Telefónica’s R&D teams have proposed innovative explainability and interpretability methods that satisfy their business needs, one of which is LIDE (Local Interpretable Data Explanations), which can compute explanations directly from data. This approach has been implemented in XAIoGraphs, an open-source tool presented in this talk.
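The general idea of deriving a local explanation directly from data can be illustrated with a simple perturbation-based sketch. This is not the LIDE algorithm or the XAIoGraphs API; it is a generic, hypothetical illustration in which `predict` stands in for any black-box model and each feature's attribution is the change in prediction when that feature is reset to a baseline value.

```python
# Illustrative local explanation by single-feature perturbation.
# NOTE: hypothetical sketch only -- not LIDE or the XAIoGraphs API.

def predict(x):
    # Toy linear scoring model standing in for any black-box predictor.
    return 0.6 * x["income"] + 0.3 * x["tenure"] - 0.5 * x["debt"]

def local_attributions(instance, baseline):
    """Score each feature by how much the prediction changes when that
    feature is reset to its baseline value (a simple perturbation idea)."""
    full = predict(instance)
    attributions = {}
    for feature in instance:
        perturbed = dict(instance)
        perturbed[feature] = baseline[feature]
        attributions[feature] = full - predict(perturbed)
    return attributions

instance = {"income": 1.0, "tenure": 0.5, "debt": 0.8}
baseline = {"income": 0.0, "tenure": 0.0, "debt": 0.0}
print(local_attributions(instance, baseline))
```

For this instance, income pushes the score up, debt pushes it down, and the attributions together account for the gap between the instance's prediction and the baseline's, which is the kind of per-decision transparency the session discusses.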
KARINA GIBERT
Professor, Director of IDEAI and Dean of COEINF
A professor at the Polytechnic University of Catalonia-BarcelonaTech (UPC), Karina Gibert holds a degree and a PhD in Computer Engineering, specialised in Computational Statistics and Artificial Intelligence, and has completed postgraduate studies in university teaching. She has been a member of the government working group Catalonia.AI since October 2018 and was part of the group of experts who drafted the Strategic Plan for AI in Catalonia (Gencat, approved on 2 February 2020). She has been Dean of COEINF since June 2023, having previously served as its Vice Dean of Big Data, Data Science and Artificial Intelligence (2017-2020). She is the secretary and co-founder of the Intelligent Data Science and Artificial Intelligence Research Centre (IDEAI, 2017 – present), the founder of the COEINF Commission for the Gender Gap in Computer Engineering (May 2018 – present) and a member of the Intercollegiate Gender Commission (2019 – present), among other gender commissions. An elected member of the board of directors of the International Environmental Modelling and Software Society (IEMS) since July 2016, she has also advised the High-Level Expert Group on Artificial Intelligence of the European Commission (September 2019 – present) and the Spanish Senate on issues of Artificial Intelligence ethics.
Explainability or the insertion of data into decision processes
JOAN VIDAL
Risk Analytics Director at CaixaBank
Joan Vidal holds a degree in Mathematics from the University of Barcelona, a postgraduate degree in Financial Mathematics from the UPC and an MBA from the Instituto de Empresa. After a few years of teaching at the Faculty of Mathematics of the University of Barcelona, he began a 20+ year career in the financial sector (mainly at Santander and CaixaBank), always linked to risk modelling. He is currently Head of Risk Analytics at CaixaBank, where, among other things, he has led the transition to developing and using machine learning models in credit risk management processes.
JAVIER MATAMOROS
AI Architect at CaixaBank Tech
Javier Matamoros Morcillo, PhD (Sabadell, 1982), obtained his MSc in Telecommunications from the UPC in 2005 and his PhD in Signal Processing from the UPC in 2010 (Cum Laude). Before joining the CaixaBank group, he worked for more than 10 years as a researcher at the CTTC (Centre Tecnològic de Telecomunicacions de Catalunya). During this period, he conducted extensive research on distributed optimisation, distributed estimation and information theory applied to different verticals (IoT, communications, ML and Smart Grids). He also supervised three doctoral theses and participated in several national and European research projects as both researcher and principal investigator, publishing over 50 research articles in high-impact journals and conferences (h-index 15). In 2017, he joined the CaixaBank Cognitive team to provide transversal AI solutions to the group. In 2020, he moved to CaixaBank Tech, where he works as an AI Architect and technical lead, participating in the design and development of cutting-edge AI solutions with high impact on the organisation.
Integrating AI at CaixaBank: transparency, efficiency and innovation
CaixaBank, a pioneer in the use of new technologies, has been strongly promoting the use of AI in different verticals within the organisation. In this session, we will present various applications and challenges, from models subject to strict regulatory requirements, such as rating models, to the use of AI in crosscutting applications.
Firstly, we will review the main challenges that the risk modelling team has addressed and overcome. These range from modelling methodologies to elements that are essential to winning the trust of users and regulators: explainability, fairness and governance, which are now at the centre of the debate around AI ethics and potential AI regulation.
We will continue the session by outlining the crosscutting uses of AI in the organisation. Along these lines, CaixaBank Tech’s AI CoE will present different applications, starting with the use of computer vision and NLP techniques for the automation of document management. Finally, we will explore the potential uses of foundation models (LLMs) within the organisation.
Presents
GEMMA BATLLE
Business Development Manager, Eurecat
Gemma Batlle is the head of ICT business development and the public sector at Eurecat (Technology Centre of Catalonia). She holds a degree in Telecommunications Engineering from La Salle, an MBA from BES La Salle (Ramon Llull University) and the University of Barcelona, and a postgraduate certificate in Building Automation and Control from BES La Salle. She sits on the board of directors of the Clúster Digital of Catalonia. At Eurecat, she works to establish patterns of collaboration and business among technology-based companies in order to promote and create innovative ideas and projects for the ICT sector. She previously worked as the head of business development at La Salle Parque de Innovación, where she was responsible for identifying different areas of knowledge and innovation to foster cooperation between companies. Before that, she worked as a coordinator in the Construction Technological Innovation Department at La Salle URL, in the Technology Transfer division.