Versions

TODOs

  • New processors: higher-order filters, such as the Moog ladder [Zav20], stacked SVFs [WM20], and Butterworth filters. A Linkwitz-Riley crossover. Differentiable artificial reverberations, including velvet-noise-based ones and feedback delay networks [DSPSValimaki23, LCL22]. Higher-order allpass filters, e.g., the (frequency-dependent) Schroeder allpass. Factorized compressors and noise gates. Limiters. Memoryless nonlinearities [PP24]. Modulation effects. Simple oscillators with modulation capabilities. ADSR envelopes.

  • New containers: a multiband processor based on the Linkwitz-Riley crossover (see the sketch below).
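Since the Linkwitz-Riley crossover appears in both lists, here is a minimal sketch of how such a differentiable band split could look, built on torchaudio's lfilter. All function and parameter names here are hypothetical illustrations, not part of the GRAFX API.

```python
# Hypothetical sketch of a differentiable Linkwitz-Riley (LR4) crossover;
# not the GRAFX implementation. LR4 cascades two 2nd-order Butterworth
# filters per band, so the low and high outputs sum to an allpass response.
import math
import torch
import torchaudio.functional as AF

def butterworth_biquad(fc, fs, highpass=False):
    # 2nd-order Butterworth coefficients via the bilinear transform (Q = 1/sqrt(2)).
    w = 2 * math.pi * fc / fs
    cos_w = torch.cos(w)
    alpha = torch.sin(w) / math.sqrt(2)
    if highpass:
        b = torch.stack([(1 + cos_w) / 2, -(1 + cos_w), (1 + cos_w) / 2])
    else:
        b = torch.stack([(1 - cos_w) / 2, 1 - cos_w, (1 - cos_w) / 2])
    a = torch.stack([1 + alpha, -2 * cos_w, 1 - alpha])
    return b, a

def linkwitz_riley_split(x, fc, fs=44100):
    # Apply each Butterworth biquad twice to obtain the LR4 bands.
    b_lo, a_lo = butterworth_biquad(fc, fs, highpass=False)
    b_hi, a_hi = butterworth_biquad(fc, fs, highpass=True)
    low = AF.lfilter(AF.lfilter(x, a_lo, b_lo, clamp=False), a_lo, b_lo, clamp=False)
    high = AF.lfilter(AF.lfilter(x, a_hi, b_hi, clamp=False), a_hi, b_hi, clamp=False)
    return low, high

x = torch.randn(1, 2 * 44100)                  # (channel, time)
fc = torch.tensor(1000.0, requires_grad=True)  # cutoff stays differentiable
low, high = linkwitz_riley_split(x, fc)
(low + high).pow(2).mean().backward()          # gradients flow back to fc
```

A multiband container could then split the input with a bank of such crossovers, process each band with a child processor, and sum the results.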

v0.6.0

v0.5.x

Pre-Release

A preliminary version of this library was created for the work Blind Estimation of Audio Processing Graph [LPPL23]. Its aim was to create a simple baseline that predicts a graph from its output audio (optionally along with its input audio). At that time (Summer 2022), the literature on differentiable audio processors (and their efficient computation on GPUs) was not as rich as it is now. This led us to re-implement various processors in jax so that both the forward and backward passes run efficiently on CPU with jax.jit. Our hope was that, if the forward pass was written correctly, parameter optimization with gradient descent would work as well. Of course, this was not the case; for example, the modulation effects were not trained at all (now we know why: [CKBB23, HSF23]). Furthermore, backpropagation through the graphs (even those with only ten nodes) was still too slow to be practical. Consequently, we decided to use the graph engine only for the forward passes, and the graph and parameter predictors were trained with a simple “parameter loss” instead.
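For illustration, here is a minimal sketch of that early recipe, using a toy one-pole lowpass rather than any of the original processors: write the forward pass in jax, compile it, and let autodiff provide the backward pass.

```python
# Toy illustration (not the original code): a one-pole lowpass whose
# forward pass is jit-compiled and whose gradients come from jax autodiff.
import jax
import jax.numpy as jnp

def one_pole_lowpass(coeff, x):
    # y[n] = (1 - coeff) * x[n] + coeff * y[n - 1], computed with a scan.
    def step(y_prev, x_n):
        y_n = (1 - coeff) * x_n + coeff * y_prev
        return y_n, y_n
    _, y = jax.lax.scan(step, 0.0, x)
    return y

@jax.jit
def loss(coeff, x, target):
    return jnp.mean((one_pole_lowpass(coeff, x) - target) ** 2)

grad_fn = jax.jit(jax.grad(loss))   # the compiled backward pass

x = jnp.ones(1024)
target = one_pole_lowpass(0.5, x)   # pretend ground truth
g = grad_fn(0.9, x, target)         # gradient w.r.t. the filter coefficient
```

For a memoryless processor like this, the gradient is well behaved; as noted above, it was the modulation effects and the full graphs where this "forward pass first" assumption broke down.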

After a year, we decided to revisit this idea of differentiable audio processing graphs, as many advances in differentiable processors had been made in the meantime [BSEP23, CKBB23, Col23, CR23, HSF23, HSF+23, MS23, SWR23, YXT+23]. This led to the current version of GRAFX, which is entirely based on PyTorch (at this point, whether the backend is PyTorch or jax does not matter much, but we chose the former for its popularity and ease of use). This library [LMRL+24b] was developed along with the companion work Searching For Music Mixing Graphs: A Pruning Approach [LMRL+24a]. Its motivation was that, unlike in the previous work [LPPL23], we wanted to find graphs and parameters that match real-world music mixtures, so that we no longer need to rely on synthetic data when training the neural networks.