GENERATIVE AI: LLMs VS SLMs

10 October 2024 | 09.00h to 10.00h

Many generative AI applications are based on large language models (LLMs), such as the different versions of OpenAI’s GPT. Training these kinds of models requires million-dollar investments that are within reach of only a small number of companies and institutions.

Small language models (SLMs) require less computational power for their training, operation and deployment. Despite some limitations in the complexity of the tasks they can perform, in some cases SLMs can be used as an alternative to LLMs.

In this session, we will see real examples and use cases of applications that illustrate the use of LLMs or SLMs, analysing aspects such as efficiency, performance, deployment and reasoning capabilities when choosing one option or the other.

Presenter:

  • José Perona, Data and Analytics Manager – Generative AI, NTT DATA

Participants:

  • Oriol Alàs, Researcher, Applied Artificial Intelligence (AAI), Eurecat

  • Karina Gibert, Professor, Director of IDEAI-UPC and Dean of COEINF

  • Ramon Serrallonga, CTO & Chief AI Scientist, InnoAnalytiX

Speakers

ORIOL ALÀS

Researcher, Applied Artificial Intelligence (AAI), Eurecat

He is a researcher in the Applied Artificial Intelligence (AAI) unit at Eurecat. He holds a degree in Computer Engineering with a specialization in computing and a master’s degree in Computer Engineering, both from the University of Lleida (UdL). Since joining Eurecat, he has been involved in data analytics, artificial intelligence and data engineering projects. Currently, he is working on Computer Vision and Natural Language Processing projects, applying cutting-edge models and algorithms to address challenges such as object detection, anomaly detection or data generation.

Dynamic Text Adaptation with fRAGments: A Modular Approach to Retrieval-Augmented Generation

We introduce fRAGments, a modular Retrieval-Augmented Generation (RAG) architecture that leverages SLMs to flexibly adapt local text databases into locally deployed text assistants. This architecture aims to overcome two major challenges: it allows for the automatic configuration of these systems regardless of the local files stored, and it mitigates the drawbacks associated with cloud-based systems.
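For illustration only, the sketch below shows the general RAG pattern that an architecture such as fRAGments builds on: retrieve the most relevant fragments from a local text collection, then hand them as context to a locally deployed SLM. The function names, toy documents and prompt format are hypothetical placeholders, not taken from fRAGments itself.

    # Minimal, illustrative sketch (not the fRAGments implementation) of the
    # general RAG pattern: retrieve the most relevant fragments from a local
    # text collection, then build a prompt for a locally deployed SLM.
    from collections import Counter
    import math

    def tokenize(text: str) -> list[str]:
        return text.lower().split()

    def score(query: str, document: str) -> float:
        # Crude lexical-overlap relevance score; real systems use embeddings.
        q, d = Counter(tokenize(query)), Counter(tokenize(document))
        return sum((q & d).values()) / math.sqrt(len(d) + 1)

    def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
        # Return the k local fragments most relevant to the query.
        return sorted(documents, key=lambda doc: score(query, doc), reverse=True)[:k]

    def build_prompt(query: str, context: list[str]) -> str:
        # Assemble the context-grounded prompt handed to the local SLM.
        joined = "\n".join(f"- {fragment}" for fragment in context)
        return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

    # Toy local "files"; the resulting prompt would be sent to any on-premise SLM.
    documents = [
        "The quarterly report is stored in the finance folder.",
        "Holiday requests must be approved by the team lead.",
    ]
    question = "Where is the quarterly report?"
    print(build_prompt(question, retrieve(question, documents)))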

KARINA GIBERT

Professor, Director of IDEAI-UPC and Dean of COEINF

Full Professor at the Universitat Politècnica de Catalunya-BarcelonaTech (UPC). Bachelor, Master and PhD in Computer Science with specialisations in computational statistics and artificial intelligence. Director and co-founder of the Intelligent Data Science and Artificial Intelligence research centre at UPC (IDEAI-UPC, 2018-). Dean of the Illustrious Official College of Informatics Engineering of Catalonia (COEINF, 2023-). Expert and co-author of the Catalan Strategy for Artificial Intelligence, Catalonia.AI (Catalan Government, 2018-). Advisor to the Catalan and Spanish governments and the European Commission on AI ethics and digital transformation. Awards: WomenTech Award 2023 (Women360), National Award for Informatics Engineering 2023 (General Council of Informatics Engineering Colleges of Spain), Ada Byron Award 2022 (College of Computer Engineering of Galicia), donaTIC Award 2018 (GenCat) and Creu Casas Mention 2020-2021 (IEC).

The tension between Big Data and Small Data in language models

The explosion we have experienced in the field of AI in recent years has traditionally been based on the existence of Big Data, and language models are no exception: the deployment of generative AI relies on huge databases to train them. In reality, however, the world is not as Big Data-driven as we would like, and there are many environments without Big Data available where AI still has a lot to say. There are also applications of computational linguistics in smaller languages. Catalan, in fact, sits somewhere in between, and stabilising language algorithms for minority, smaller or medium-sized languages such as Catalan presents its own challenges. This talk will reflect on the limitations faced by smaller languages, the limitations of generative AI and language models, and the alternatives that AI offers in these circumstances.

RAMON SERRALLONGA

CTO & Chief AI Scientist, InnoAnalytiX

Ramon Serrallonga holds a bachelor’s degree in Economics from the Universitat Autònoma de Barcelona and an MBA from ESADE Business School. His international professional career has developed in the technology sector across different industries, with a focus on Artificial Intelligence for the last nine years and both managerial and technical responsibilities. He co-founded InnoAnalytiX, where he currently serves as CTO and Chief AI Scientist.

Revolutionizing Management with just an SLM

Many companies struggle to harness the power of Generative AI when trying to solve concrete downstream tasks. Using a successful case in Management Consulting, we will uncover the recipe for getting what you want from Generative AI within your budget, answering common questions such as: should we use a proprietary model or an open-source one? Should we use RAG or fine-tuning? Should we use an LLM or an SLM? And much more.

Presenter

JOSÉ PERONA

Data and Analytics Manager – Generative AI, NTT DATA

José Perona graduated as a Telecommunications Engineer from the Universidad Carlos III de Madrid, with over 8 years of experience dedicated to artificial intelligence. He has worked in social robotics, participating in projects in collaboration with the MIT Media Lab, and has specialized in Natural Language Processing (NLP) and Computer Vision, focusing on the most disruptive technologies. His current responsibilities include leading projects related to Generative AI and coordinating the innovation center for Large Language Models (LLMs) for the EMEAL region.