Neural audio instruments: epistemological and phenomenological perspectives on musical embodiment of deep learning
Journal article, 2025

Neural Audio is a category of deep learning pipelines that output audio signals directly, in real-time action-sound interaction scenarios. In this work, we examine how neural-audio-based artificial intelligence, when embedded in digital musical instruments (DMIs), shapes embodied musical interaction. While DMIs have long struggled to match the physical immediacy of acoustic instruments, neural audio methods can magnify this challenge: they require data collection, model training, and deep theoretical knowledge, all of which appear to push musicians toward symbolic or conceptual modes of engagement. Paradoxically, these same methods can also foster more embodied practices, by introducing opaque yet expressive behaviors that free performers from rigid technical models and encourage discovery through tactile, real-time experimentation. Drawing on established perspectives in the DMI embodiment literature, as well as emerging neural-audio-focused efforts within the community, we highlight two seemingly conflicting aspects of these instruments: on one side, they inherit many of the "disembodying" traits known from DMIs; on the other, they open pathways reminiscent of acoustic phenomenology and soma, potentially restoring the close physical interplay often missed in digital performance.

Keywords: neural audio instruments, deep learning, digital musical instruments, latent space, embodied interaction, music performance, artificial intelligence, neural audio

Authors

Victor Zappi

Northeastern University

Kivanc Tatar

University of Gothenburg

Chalmers University of Technology, Computer Science and Engineering, Data Science and AI

Frontiers in Computer Science

2624-9898 (eISSN)

Vol. 7, Article 1575168

Subject Categories (SSIF 2025)

Other Engineering and Technologies

Computer Sciences

Music

DOI

10.3389/fcomp.2025.1575168

Latest update

9/18/2025