Description

This project aims to develop a federated, multimodal AI model for real-time anomaly detection in healthcare. During the London workshop, teams will build and integrate unimodal models trained on clinical text, medical images, and physiological signals. The final multimodal model will be trained via federated learning and optimized for deployment on embedded devices such as the Raspberry Pi and microcontroller platforms. Using public datasets like MIMIC and CheXpert, the project emphasizes privacy-preserving, edge-ready medical AI. A specific focus will be placed on Personalized Federated Learning (PFL), allowing each local model to be fine-tuned to its data modality and distribution. This will improve model generalization and adaptation across heterogeneous clients. Post-workshop efforts will focus on further optimization and remote validation.
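As a minimal illustration of the federated training loop described above, the sketch below shows the standard FedAvg aggregation step: each client trains locally, and a server combines the resulting parameters weighted by local dataset size. This is a generic sketch, not the project's actual implementation; the function name and the toy weight vectors are hypothetical.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg).

    client_weights: list of 1-D parameter arrays, one per client.
    client_sizes: number of local training samples per client,
                  used as aggregation weights.
    """
    total = sum(client_sizes)
    stacked = np.stack(client_weights)
    coeffs = np.array(client_sizes, dtype=float) / total
    # Each client's contribution is proportional to its data volume
    return (coeffs[:, None] * stacked).sum(axis=0)

# Three hypothetical clients with different data volumes
w_global = fedavg(
    [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])],
    [100, 100, 200],
)
```

In a real deployment a framework such as Flower or a custom gRPC layer would handle the communication; only the aggregation rule is shown here.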

Context

Alzheimer’s disease is the leading cause of dementia.
It is a progressive condition: it worsens gradually over time, slowly destroying memory and thinking skills.

Project objectives

During the workshop in London, our focus will be on constructing a multimodal model that integrates various types of healthcare data.

Once developed, we will federate this model for distributed training across multiple devices. Personalized Federated Learning strategies such as local fine-tuning and meta-learning will also be explored to ensure better adaptation to the local distribution of each participating node.
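The local fine-tuning strategy mentioned above can be sketched in a few lines: each node starts from the federated (global) parameters and takes a small number of gradient steps on its own data, yielding a model adapted to the local distribution. The example below uses a toy linear model with a mean-squared-error objective; the function and data are hypothetical stand-ins, assuming parameters are plain NumPy arrays.

```python
import numpy as np

def personalize(global_w, X, y, lr=0.1, steps=50):
    """Fine-tune a global linear model on one client's local data.

    Starts from the federated weights and runs a few gradient
    steps on the client's own samples, so the resulting model
    adapts to the local data distribution (a simple PFL strategy).
    """
    w = global_w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = X @ np.array([2.0, -1.0])          # client's true local relation
w_local = personalize(np.zeros(2), X, y)
```

Meta-learning variants (e.g. MAML-style PFL) follow the same pattern but additionally optimize the global initialization so that a few local steps suffice on every client.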

The project is divided into several work packages (WPs), each producing an independent software component.

WP1 | Development of unimodal and multimodal models
WP2 | Compression/optimisation of unimodal/multimodal models
WP3 | Integration of the optimized models on RPi and FPGA devices
WP4 | Personalized federated learning

Challenge lead(s)