ISMIR 2019

Music Nerds in Delft

November 4, 2019 — November 9, 2019

computers are awful
machine learning
machine listening
making things
signal processing

I was at ISMIR 2019 in Delft, that is, the 20th conference of the International Society for Music Information Retrieval. I made a miscellaneous repo of stuff. The videos are online.

1 Tutorials

Generating Music with GANs: An Overview and Case Studies by Hao-Wen Dong and Yi-Hsuan Yang.

Waveform-based music processing with deep learning by Sander Dieleman, Jordi Pons and Jongpil Lee. I have blogged a bunch of Jordi’s work here under source separation. Sander’s presentation had some interesting framings about

  • mode-seeking versus mode-covering approximations to probability distributions;
  • sparse versus dense conditioning signals.
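The mode-seeking versus mode-covering distinction can be made concrete with a toy calculation (the numbers and the single-Gaussian family here are my own, not from the talk): fit a single Gaussian to a bimodal target and compare the two directions of KL divergence. The forward KL(p‖q), which mode-covering methods minimise, prefers a wide Gaussian straddling both modes; the reverse KL(q‖p), which mode-seeking methods minimise, prefers a narrow Gaussian sitting on one mode.

```python
import numpy as np

# Bimodal target p; the approximating family q is a single Gaussian.
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

p = 0.5 * gauss(x, -3, 0.7) + 0.5 * gauss(x, 3, 0.7)

def kl(a, b):
    # KL(a || b) on the grid, guarding against log(0)
    eps = 1e-300
    return np.sum(a * (np.log(a + eps) - np.log(b + eps))) * dx

# Mode-covering candidate: roughly moment-matched to p, straddles both modes.
q_cover = gauss(x, 0.0, 3.1)
# Mode-seeking candidate: locked onto the right-hand mode.
q_seek = gauss(x, 3.0, 0.7)

print("forward KL:", kl(p, q_cover), kl(p, q_seek))
print("reverse KL:", kl(q_cover, p), kl(q_seek, p))
```

Forward KL punishes q for having no mass where p does (so the single-mode fit scores terribly); reverse KL punishes q for putting mass where p has none (so the wide fit scores terribly). This is why adversarial and likelihood-based generative models of audio can fail in characteristically different ways.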

2 Paper highlights

These are the papers useful for my own interests; omission is not necessarily an indictment of any paper I do not mention.

Or… see the ISMIR paper explorer.

2.1 Source separation

  1. Spleeter (Hennequin et al. 2019) from Deezer labs is one deep learning approach
  2. Open Unmix (Stöter et al. 2019) from Sony CSL labs is another deep learning approach
  3. UNMIXER (Smith, Kawasaki, and Goto 2019) a web UI for a cute hand-rolled matrix factorisation method

All blogged under source separation.
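For a feel of what the factorisation-based separators (e.g. Unmixer, or the NMF Toolbox in the references) are doing under the hood, here is a toy NMF sketch — not any paper's actual method. A nonnegative "spectrogram" V is factorised as V ≈ WH by classic multiplicative updates, and Wiener-style masks carve V into per-source spectrograms:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake "magnitude spectrogram": two sources with disjoint spectral templates.
f, t, k = 64, 100, 2
W_true = np.zeros((f, k))
W_true[:32, 0] = rng.random(32)   # source 1 lives in the low bins
W_true[32:, 1] = rng.random(32)   # source 2 lives in the high bins
H_true = rng.random((k, t))
V = W_true @ H_true + 1e-6

# Lee-Seung multiplicative updates for the Euclidean loss.
W = rng.random((f, k)) + 1e-3
H = rng.random((k, t)) + 1e-3
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)

# Wiener-style masks give per-source spectrograms that sum back to V.
components = [np.outer(W[:, i], H[i]) for i in range(k)]
approx = sum(components)
sources = [c / (approx + 1e-9) * V for c in components]
print("relative error:", np.linalg.norm(V - approx) / np.linalg.norm(V))
```

The deep-learning separators above replace the factorisation with a learned network, but the masking step at the end is essentially the same trick.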

2.2 Decoupled representations

A lot of the authors would like to impose a certain factorisation, or “near”-factorisation, over a latent space into humanly interpretable dimensions. So they would like to disentangle, say, timbre from pitch from loudness, or similar. I would like to return to this problem; it looks fun.
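The payoff of such a factorisation is that attribute edits become surgical. A linear-decoder caricature (all names and dimensions invented for illustration, nothing like the actual VAE models in these papers): if the latent is partitioned into a “pitch” block and a “timbre” block and the decoder acts additively on the blocks, then swapping one block between two sounds changes the output by exactly that factor's contribution.

```python
import numpy as np

rng = np.random.default_rng(2)
d_pitch, d_timbre, d_out = 3, 3, 8

# Block-structured linear decoder: output = W_pitch @ z_p + W_timbre @ z_t.
W_pitch = rng.standard_normal((d_out, d_pitch))
W_timbre = rng.standard_normal((d_out, d_timbre))

def decode(z):
    z_p, z_t = z[:d_pitch], z[d_pitch:]
    return W_pitch @ z_p + W_timbre @ z_t

za = rng.standard_normal(d_pitch + d_timbre)  # sound A
zb = rng.standard_normal(d_pitch + d_timbre)  # sound B

# "Timbre transfer": keep A's pitch block, take B's timbre block.
z_mix = np.concatenate([za[:d_pitch], zb[d_pitch:]])

# Because the factors do not interact, only the timbre contribution changes.
delta = decode(z_mix) - decode(za)
expected = W_timbre @ (zb[d_pitch:] - za[d_pitch:])
print(np.allclose(delta, expected))
```

The hard part, of course, is that real encoders are nonlinear and nothing forces the blocks to align with pitch and timbre; that is exactly what the disentanglement objectives in these papers try to enforce.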

3 Data sets

All blogged under audio corpora.

4 Excellent demos


  • (Esling et al. 2019)

5 Serendipity

The So Strangely music science podcast.

6 References

Bittner, Fuentes, Rubinstein, et al. 2019. “Mirdata: Software for Reproducible Usage of Datasets.” In International Society for Music Information Retrieval (ISMIR) Conference.
Bogdanov, Porter, Schreiber, et al. 2019. “The AcousticBrainz Genre Dataset: Multi-Source, Multi-Level, Multi-Label, and Large-Scale.” In.
Choi, Keunwoo, and Cho. 2019. “Deep Unsupervised Drum Transcription.” In.
Choi, Jeong, Lee, Park, et al. 2019. “Zero-Shot Learning for Audio-Based Music Classification and Tagging.” In.
Cífka, Ondřej. 2019. “Supplementary Material: Supervised Symbolic Music Style Translation Using Synthetic Data.”
Cífka, Ondřej, and Richard. 2019. “Supervised Symbolic Music Style Translation Using Synthetic Data.” In.
Engel, Agrawal, Chen, et al. 2019. “GANSynth: Adversarial Neural Audio Synthesis.” In Seventh International Conference on Learning Representations.
Engel, Hantrakul, Gu, et al. 2019. “DDSP: Differentiable Digital Signal Processing.” In.
Esling, Masuda, Bardet, et al. 2019. “Flowsynth: Semantic and Vocal Synthesis Control.” In.
Foroughmand, and Peeters. 2019. “Deep-Rhythm for Tempo Estimation and Rhythm Pattern Recognition.”
Fuentes, Maia, Rocamora, et al. 2019. “Tracking Beats and Microtiming in Afro-Latin American Music Using Conditional Random Fields and Deep Learning.” In.
Hennequin, Khlif, Voituret, et al. 2019. “Spleeter: A Fast and State-of-the Art Music Source Separation Tool with Pre-Trained Models.” In.
Kalchbrenner, Elsen, Simonyan, et al. 2018. “Efficient Neural Audio Synthesis.” arXiv:1802.08435 [Cs, Eess].
Lattner, Dörfler, and Arzt. 2019. “Learning Complex Basis Functions for Invariant Representations of Audio.” In Proceedings of the 20th Conference of the International Society for Music Information Retrieval.
Lattner, and Grachten. 2019. “High-Level Control of Drum Track Generation Using Learned Patterns of Rhythmic Interaction.” In IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA 2019).
López-Serrano, Dittmar, Özer, et al. 2019. “NMF Toolbox: Music Processing Applications of Nonnegative Matrix Factorization.” In.
Luo, Agres, and Herremans. 2019. “Learning Disentangled Representations of Timbre and Pitch for Musical Instrument Sounds Using Gaussian Mixture Variational Autoencoders.” In Proceedings of the 20th Conference of the International Society for Music Information Retrieval.
MacKinlay, and Botev. 2019. “Mosaic Style Transfer Using Sparse Autocorrelograms.” In Proceedings of the 20th Conference of the International Society for Music Information Retrieval.
Nieto, McCallum, Davies, et al. 2019. “The Harmonix Set: Beats, Downbeats, and Functional Segment Annotations of Western Popular Music.” In.
Pati, Lerch, and Hadjeres. 2019. “Learning to Traverse Latent Spaces for Musical Score Inpainting.” In Proceedings of the 20th Conference of the International Society for Music Information Retrieval.
Pfleiderer, Frieler, Abeßer, et al., eds. 2017. Inside the Jazzomat - New Perspectives for Jazz Research.
Robinson, and Brown. 2019. “Automated Time-Frequency Domain Audio Crossfades Using Graph Cuts.” In.
Smaragdis. 2004. “Non-Negative Matrix Factor Deconvolution; Extraction of Multiple Sound Sources from Monophonic Inputs.” In Independent Component Analysis and Blind Signal Separation. Lecture Notes in Computer Science.
Smith, Kawasaki, and Goto. 2019. “Unmixer: An Interface for Extracting and Remixing Loops.” In.
Stöter, Uhlich, Liutkus, et al. 2019. “Open-Unmix - A Reference Implementation for Music Source Separation.” Journal of Open Source Software.
Thickstun, Harchaoui, Foster, et al. 2019. “Coupled Recurrent Models for Polyphonic Music Composition.” In.
Tsuchida, Fukayama, Hamasaki, et al. 2019. “AIST Dance Video Database: Multi-Genre, Multi-Dancer, and Multi-Camera Database for Dance Information Processing.” In.
Wei, Wu, and Su. 2019. “Generating Structured Drum Pattern Using Variational Autoencoder and Self-Similarity Matrix.” In.
Yang, Wang, Wang, et al. 2019. “Deep Music Analogy Via Latent Representation Disentanglement.” In Proceedings of the 20th Conference of the International Society for Music Information Retrieval.