DEEP EDGE AI HUB

REALIZING THE BENEFITS OF COMPLEX AI IN EMBEDDED SYSTEMS

How and why should complex AI models be brought as close as possible to the field and its operational needs?

While embedded software has made progress with models of a few million parameters, advanced analysis requires either deploying multiple models coordinated by an orchestrator or using larger, more complex models that demand hardware able to keep power consumption in check while delivering high execution speed. Either way, memory becomes a central challenge, since all the required weights must be stored on the device.
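To give a rough sense of the memory challenge mentioned above, the back-of-the-envelope sketch below estimates the storage needed just for model weights at a few hypothetical model sizes and numeric precisions (the sizes are illustrative, not figures from the Hub):

```python
# Back-of-the-envelope estimate of the on-device memory needed to store
# model weights alone (activations, buffers, and code add further overhead).

def weight_memory_mib(n_params: int, bytes_per_weight: float) -> float:
    """Memory in MiB required to store n_params weights at the given precision."""
    return n_params * bytes_per_weight / (1024 ** 2)

# Illustrative model sizes: 5M (small embedded model), 100M, 1B parameters.
for n_params in (5_000_000, 100_000_000, 1_000_000_000):
    for precision, nbytes in (("fp32", 4), ("fp16", 2), ("int8", 1)):
        mib = weight_memory_mib(n_params, nbytes)
        print(f"{n_params / 1e6:>6.0f}M params @ {precision}: {mib:9.1f} MiB")
```

Even with int8 quantization, a billion-parameter model needs close to a gigabyte for weights alone, which is why deployment on constrained hardware hinges on memory budgeting.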

The Deep Edge AI Hub aims to experiment with advanced embedded AI use cases on generic platforms. Its objective is to identify and analyze software and hardware deployment bottlenecks, with a particular focus on memory requirements, while also assessing the limitations that existing solutions impose on the new products and services built upon them.

Our Operating Model
A Hub with two approaches

The Hub brings together both end users who provide use cases and suppliers of technological building blocks. This creates a collaborative dynamic aimed at delivering concrete implementations that address user needs while fully leveraging the proposed technologies and understanding their limitations.

Two tracks are offered in parallel:

A generic track:
Based on two platforms—one focused on wearable systems and the other on swarm cooperation—we jointly develop a roadmap of demonstrators built around use cases shared by the partners. These advanced-stage use cases make it possible to both identify the limitations of current technologies and explore the full range of potential applications they may enable.

A specific track:
Each end-user partner proposes one use case per year, which is integrated into our platforms. This allows them to demonstrate the deployment of complex AI (multi-agent systems, large models such as vision-language-action (VLA) and vision-language (VLM) models, etc.) within their own operational contexts and constraints. As a result, they obtain a proof of concept that can be tested directly in their environment.

The CEA brings together all its software and hardware expertise, with teams from our systems department capable of integrating sensors with AI and porting solutions onto resource-constrained hardware. We are also supported by our semantic and video analysis department, which is able to deploy large models across a wide range of architectures, as well as by our embedded AI laboratory, which specializes in designing AI deployments for highly constrained hardware.

Two generic platforms are currently under development: one based on swarm cooperation, illustrated through rover systems, and a “wearable” platform combining custom sensors with headsets or smart glasses.

Engage
Our partners

We are currently seeking partners from the automotive, healthcare, robotics, mobile computing, wearable devices, industrial IoT, smart home, and related sectors.

They will be connected with providers of image and audio analysis models, swarm collaboration solutions, and embedded hardware technologies.

If you develop products that run without a connection to the power grid, must analyze multiple signals, need to understand their environment in order to make decisions, or require cooperation between devices, all in environments where a mini-PC-class computer cannot be integrated, then this Hub is designed for you.

Learn more about our Hubs