I am an Audio ML Engineer at Sonaid, where I work at the intersection of deep learning and audio signal processing. My work focuses on developing AI models for audio event recognition, covering the full deep learning pipeline from dataset creation to embedded deployment.
I hold a PhD from Université Grenoble Alpes, where I researched speaker counting and localization using deep learning, with a focus on spatial audio. My doctoral research was conducted under a partnership between Orange Labs and GIPSA-lab.
My research interests span audio deep learning, sound event detection, spatial audio processing, and neural audio synthesis. I am particularly interested in bridging the gap between cutting-edge research and real-world audio applications.
Beyond work, I am passionate about music creation: I have been playing piano for over 25 years, composing and arranging pieces while exploring music production and sound design. I have also played volleyball since my teenage years, competing at the pre-national level. In my spare time, I enjoy reading about science and history, playing chess, diving, hiking, and exploring the world.
Projects
Transform audio timbre using neural synthesis
Generate realistic sound effects from text descriptions