Pierre-Alain Moellic

Research Engineer, CEA-Leti

Bio:

Pierre-Alain Moellic is a research engineer at CEA-LETI in the Security of Embedded Systems laboratory. He is a member of a joint research team between CEA-LETI and Mines Saint-Etienne (IMT) at the Centre of Microelectronics of Provence Georges Charpak (Gardanne), which gathers cutting-edge equipment for hardware security characterization and testing. His work focuses mainly on the interaction between physical attacks and the security of machine learning. He coordinates the ANR PICTURE project, dedicated to the security of embedded neural network models, is the national coordinator for the European InSecTT program on "AIoT", and leads the "Security of AI" topic in the PULSE program (IRT Nanoelec).

Abstract:

The large-scale deployment of machine learning (ML) models on a wide variety of hardware platforms raises many security issues, methodically studied and demonstrated by both the adversarial and privacy-preserving ML communities. Among the most worrying attacks today are adversarial examples, which optimally alter inference inputs to fool the predictions of a state-of-the-art model, especially a deep neural network. Another important threat, known as model extraction, concerns the confidentiality of a protected black-box model that an adversary wants to clone or whose performance they want to steal. Until recently, most attacks treated the target model as a pure abstraction, relying essentially on API-based strategies, i.e. exploiting a set of inputs/outputs and some knowledge about the model and the data. In this talk, we highlight recent implementation-based attacks that leverage software or hardware features of a deployed model. In particular, we describe adversarial attacks that directly alter the internal parameters stored in memory (e.g., DRAM or Flash) and physical attacks (side-channel and fault injection analysis) for model extraction. Given such an attack surface, the development of defenses is as challenging as it is urgent and must be guided by sound and robust evaluation protocols.
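As a rough illustration of why attacks on stored parameters matter (a sketch for intuition only, not material from the talk), the snippet below flips a single bit of an IEEE 754 float32 weight, the kind of corruption a memory fault injection on DRAM or Flash could induce. The `flip_bit` helper and the example values are hypothetical.

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit (0 = LSB, 31 = sign) of a float32 value,
    mimicking a single memory fault on a stored weight."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    as_int ^= 1 << bit
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int))
    return flipped

w = 0.5  # a hypothetical stored weight
# Flipping a low-order mantissa bit barely changes the value...
print(flip_bit(w, 0))
# ...while flipping the top exponent bit (bit 30) blows it up to ~2**127,
# which is enough to wreck a neural network's predictions on its own.
print(flip_bit(w, 30))
```

The asymmetry is the point: out of 32 bits, a handful of exponent bits dominate, so an attacker who can target even one well-chosen bit per weight needs very few faults to degrade a model.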

10:10 a.m. - 10:30 a.m.

Thursday Tomorrow’s Cybersecurity AM

Advanced security threats against embedded AI: protecting models against algorithmic and implementation-based attacks