A Deep Learning Framework for Musical Acoustics Simulations
Paper in proceedings, 2024

The acoustic modeling of musical instruments is a computationally heavy process, often bound to the solution of complex systems of partial differential equations (PDEs). Numerical models can achieve a high level of accuracy, but they may take up to several hours to complete a full simulation, especially in the case of intricate musical mechanisms. The application of deep learning, and in particular of neural operators that learn mappings between function spaces, has the potential to revolutionize how acoustics PDEs are solved and to noticeably speed up musical simulations. However, extensive research is necessary to understand the applicability of such operators in musical acoustics; this requires large datasets capable of capturing the relationship between input parameters (excitation) and output solutions (acoustic wave propagation) for each target musical instrument/configuration. With this work, we present an open-access, open-source framework designed for the generation of numerical musical acoustics datasets and for the training and benchmarking of acoustics neural operators. We first describe the overall structure of the framework and the proposed data generation workflow; we then detail the first numerical models that were ported to the framework. This work is a first step towards gathering a research community that focuses on deep learning applied to musical acoustics and shares workflows and benchmarking tools.
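To make the input/output relationship concrete, the sketch below builds a toy dataset of (excitation, solution) pairs of the kind a neural operator would train on: each sample maps an impulse excitation on a small 2D membrane to a sequence of wavefield frames produced by a simple finite-difference time-domain (FDTD) scheme. This is a minimal illustration under assumed shapes and parameters, not the framework's actual solver or data layout; all function names here are hypothetical.

```python
import numpy as np

def simulate_membrane(excitation, steps=64, c=0.5):
    """Toy 2D FDTD wave propagation (hypothetical sketch, not the
    framework's solver): maps an initial displacement field to a
    sequence of wavefield frames with clamped boundaries."""
    h, w = excitation.shape
    u_prev = np.zeros((h, w))
    u = excitation.astype(float).copy()
    frames = [u.copy()]
    for _ in range(steps - 1):
        # discrete Laplacian via shifted copies of the field
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
               + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        u_next = 2 * u - u_prev + (c ** 2) * lap
        # clamp the rim of the membrane to zero displacement
        u_next[0, :] = u_next[-1, :] = 0.0
        u_next[:, 0] = u_next[:, -1] = 0.0
        u_prev, u = u, u_next
        frames.append(u.copy())
    return np.stack(frames)  # shape: (steps, h, w)

def make_dataset(n_samples=8, size=16, rng=None):
    """Build (excitation, solution) pairs: the supervision signal
    a neural operator would learn from."""
    rng = rng or np.random.default_rng(0)
    inputs, outputs = [], []
    for _ in range(n_samples):
        exc = np.zeros((size, size))
        i, j = rng.integers(2, size - 2, size=2)
        exc[i, j] = 1.0  # impulse excitation at a random interior point
        inputs.append(exc)
        outputs.append(simulate_membrane(exc))
    return np.stack(inputs), np.stack(outputs)

X, Y = make_dataset()
print(X.shape, Y.shape)  # (8, 16, 16) (8, 64, 16, 16)
```

A trained operator would then approximate the map `X[i] -> Y[i]` in a single forward pass, replacing the time-stepping loop that dominates the cost of conventional simulation.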

Musical Acoustics Simulations

Numerical Modeling

Acoustics Benchmarking

Datasets

Deep Learning

Authors

Jiafeng Chen

University of Michigan

Kivanc Tatar

Chalmers University of Technology, Computer Science and Engineering, Data Science and AI

Victor Zappi

Northeastern University

Proceedings of AI Music Creativity 2024

AI Music Creativity 2024
Oxford

Subject Categories

Media and Communication Technology

Computational Mathematics

More information

Created

9/19/2024