Abstract

Musicians and audio engineers sculpt and transform their sounds by connecting multiple processors, forming an audio processing graph. However, most deep-learning methods overlook this real-world practice and assume fixed graph settings. To bridge this gap, we develop a system that reconstructs the entire graph from a given reference audio. We first generate a realistic graph-reference pair dataset and train a simple blind estimation system composed of a convolutional reference encoder and a transformer-based graph decoder. We apply our framework to singing voice effect and drum mixing estimation tasks; evaluation results show that our method can reconstruct complex signal routings, including multi-band processing and sidechaining.

Audio Samples

[Interactive audio players: singing voice effect estimation on seen and unseen speakers (samples 1-40 each) and drum mixing estimation on seen and unseen kits (samples 1-40 each).]

Figures

[Interactive figure gallery.]
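The abstract describes a transformer-based graph decoder, which implies the audio processing graph is represented as a flat token sequence the decoder can predict step by step. The sketch below illustrates one such serialization scheme in plain Python; the token names, format, and example routing are illustrative assumptions, not the paper's actual encoding.

```python
# Hypothetical sketch: flattening an audio processing graph (nodes = audio
# processors, edges = signal routing, including split paths for multi-band
# processing or sidechains) into a token sequence. All token names and the
# format itself are assumptions for illustration.

def serialize(nodes, edges):
    """Flatten a graph into tokens: node types first, then (src, dst) pairs."""
    tokens = ["<graph>"]
    for n in nodes:                      # e.g. "eq", "compressor", "reverb"
        tokens += ["<node>", n]
    for src, dst in edges:               # indices into the node list
        tokens += ["<edge>", str(src), str(dst)]
    tokens.append("</graph>")
    return tokens

def deserialize(tokens):
    """Invert serialize(); assumes a well-formed token sequence."""
    nodes, edges = [], []
    i = 1                                # skip "<graph>"
    while tokens[i] != "</graph>":
        if tokens[i] == "<node>":
            nodes.append(tokens[i + 1])
            i += 2
        else:                            # "<edge>"
            edges.append((int(tokens[i + 1]), int(tokens[i + 2])))
            i += 3
    return nodes, edges

# A sidechain-style routing: the input feeds both an EQ chain and a
# compressor's detector path before reaching the output.
nodes = ["input", "eq", "compressor", "output"]
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]  # (0, 2) is the sidechain path
assert deserialize(serialize(nodes, edges)) == (nodes, edges)
```

Under this kind of scheme, the decoder emits one token at a time conditioned on the reference-audio embedding, and the graph is recovered by parsing the finished sequence.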