ISMIR 2019

Music Nerds in Delft



I was at ISMIR 2019 in Delft, that is, the 20th conference of the International Society for Music Information Retrieval. I made a miscellaneous repo of stuff. Videos are online.

Tutorials

Generating Music with GANs: An Overview and Case Studies by Hao-Wen Dong and Yi-Hsuan Yang.

Waveform-based music processing with deep learning by Sander Dieleman, Jordi Pons and Jongpil Lee. I have blogged a bunch of Jordi’s work here under source separation. Sander’s presentation had some interesting framings, notably

  • mode-seeking versus mode-covering approximations to probability distributions (glossed below)
  • sparse versus dense conditioning signals
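
A quick gloss on the first point (my own summary, not a slide from the tutorial): fitting an approximating distribution $q_\theta$ to a target $p$ behaves very differently depending on which direction of the KL divergence is minimised.

```latex
% Forward KL: "mode-covering" (the maximum-likelihood direction).
% q_theta is penalised heavily wherever p has mass that q_theta misses,
% so it tends to spread itself over all modes of p.
\mathrm{KL}(p \,\|\, q_\theta) = \mathbb{E}_{x \sim p}\left[\log \frac{p(x)}{q_\theta(x)}\right]

% Reverse KL: "mode-seeking". q_theta is only penalised where it itself
% places mass, so it can collapse onto a subset of the modes of p.
\mathrm{KL}(q_\theta \,\|\, p) = \mathbb{E}_{x \sim q_\theta}\left[\log \frac{q_\theta(x)}{p(x)}\right]
```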

Paper highlights

These are papers that are useful for my own interests; omission is not necessarily an indictment of any paper I do not mention.

Or… See the ISMIR paper explorer.

Source separation

  1. Spleeter (Hennequin et al. 2019) from Deezer labs is one deep learning approach (usage sketched below)
  2. Open Unmix (Stöter et al. 2019) from Sony CSL labs is another deep learning approach
  3. UNMIXER (Smith, Kawasaki, and Goto 2019), a web UI for a cute hand-rolled matrix factorisation method

All blogged under source separation.
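
For concreteness, a minimal sketch of Spleeter’s documented Python usage; the audio path and output directory are placeholders, and it assumes the spleeter package and its pre-trained weights are installed.

```python
from spleeter.separator import Separator

# Load the pre-trained 2-stem model (vocals / accompaniment).
# Other configurations such as 'spleeter:4stems' and 'spleeter:5stems' also exist.
separator = Separator('spleeter:2stems')

# Writes one WAV file per estimated stem into the output directory.
separator.separate_to_file('some_song.mp3', 'separated/')
```

Open-Unmix exposes a comparable Python and command-line interface; UNMIXER, by contrast, runs in the browser.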

Decoupled representations

A lot of the authors would like to impose a certain factorisation, or “near”-factorisation, on a latent space so that its dimensions are human-interpretable. That is, they would like to disentangle, say, timbre from pitch from loudness, or similar (e.g. Luo, Agres, and Herremans 2019; Yang et al. 2019). I would like to return to this problem; it looks fun.
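
To make the pattern concrete, here is a toy sketch (my own placeholder, not any particular paper’s model): partition the latent code into named chunks and attach an auxiliary objective that sees only one of them, so that pitch information is nudged into its designated chunk while, ideally, timbre ends up in the other.

```python
import torch
import torch.nn as nn

class SplitLatentEncoder(nn.Module):
    """Toy encoder whose latent code is partitioned into a 'pitch' chunk
    and a 'timbre' chunk. An auxiliary pitch classifier reads only the
    pitch chunk, encouraging pitch information to live there."""

    def __init__(self, n_mels=128, z_pitch=16, z_timbre=16, n_pitches=88):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Linear(n_mels, 256),
            nn.ReLU(),
            nn.Linear(256, z_pitch + z_timbre),
        )
        self.z_pitch = z_pitch
        self.pitch_head = nn.Linear(z_pitch, n_pitches)

    def forward(self, mel_frame):
        z = self.encode(mel_frame)
        z_pitch, z_timbre = z[..., :self.z_pitch], z[..., self.z_pitch:]
        return z_pitch, z_timbre, self.pitch_head(z_pitch)

# Toy usage: a batch of 8 mel-spectrogram frames.
enc = SplitLatentEncoder()
z_pitch, z_timbre, pitch_logits = enc(torch.randn(8, 128))
```

A real system would add a decoder and reconstruction loss, plus some extra pressure (variational, adversarial, or swapping chunks between examples) to keep the remaining factors out of each chunk.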

Serendipity

The So Strangely music science podcast.

References

Bittner, Rachel M, Magdalena Fuentes, David Rubinstein, Andreas Jansson, Keunwoo Choi, and Thor Kell. 2019. “Mirdata: Software for Reproducible Usage of Datasets.” In International Society for Music Information Retrieval (ISMIR) Conference.
Bogdanov, Dmitry, Alastair Porter, Hendrik Schreiber, Julián Urbano, and Sergio Oramas. 2019. “The Acousticbrainz Genre Dataset: Multi-Source, Multi-Level, Multi-Label, and Large-Scale.” In, 8.
Choi, Jeong, Jongpil Lee, Jiyoung Park, and Juhan Nam. 2019. “Zero-Shot Learning for Audio-Based Music Classification and Tagging.” In, 8.
Choi, Keunwoo, and Kyunghyun Cho. 2019. “Deep Unsupervised Drum Transcription.” In, 9.
Cífka, Ondřej. 2019. “Supplementary Material: Supervised Symbolic Music Style Translation Using Synthetic Data,” June.
Cífka, Ondřej, and Gaël Richard. 2019. “Supervised Symbolic Music Style Translation Using Synthetic Data.” In, 8.
Engel, Jesse, Kumar Krishna Agrawal, Shuo Chen, Ishaan Gulrajani, Chris Donahue, and Adam Roberts. 2019. “GANSynth: Adversarial Neural Audio Synthesis.” In Seventh International Conference on Learning Representations.
Engel, Jesse, Lamtharn (Hanoi) Hantrakul, Chenjie Gu, and Adam Roberts. 2019. “DDSP: Differentiable Digital Signal Processing.” In.
Esling, Philippe, Naotake Masuda, Adrien Bardet, Romeo Despres, and Axel Chemla. 2019. “Flowsynth: Semantic and Vocal Synthesis Control.” In, 2.
Foroughmand, Hadrien, and Geoffroy Peeters. 2019. “Deep-Rhythm for Tempo Estimation and Rhythm Pattern Recognition,” 8.
Fuentes, Magdalena, Lucas S Maia, Martín Rocamora, Luiz W P Biscainho, Hélène C Crayencour, Slim Essid, and Juan P Bello. 2019. “Tracking Beats and Microtiming in Afro-Latin American Music Using Conditional Random Fields and Deep Learning.” In, 8.
Hennequin, Romain, Anis Khlif, Felix Voituret, and Manuel Moussallam. 2019. “Spleeter: A Fast and State-of-the-Art Music Source Separation Tool with Pre-Trained Models.” In, 2.
Kalchbrenner, Nal, Erich Elsen, Karen Simonyan, Seb Noury, Norman Casagrande, Edward Lockhart, Florian Stimberg, Aaron van den Oord, Sander Dieleman, and Koray Kavukcuoglu. 2018. “Efficient Neural Audio Synthesis.” arXiv:1802.08435 [Cs, Eess], February.
Lattner, Stefan, Monika Dörfler, and Andreas Arzt. 2019. “Learning Complex Basis Functions for Invariant Representations of Audio.” In Proceedings of the 20th Conference of the International Society for Music Information Retrieval, 8.
Lattner, Stefan, and Maarten Grachten. 2019. “High-Level Control of Drum Track Generation Using Learned Patterns of Rhythmic Interaction.” In IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA 2019).
López-Serrano, Patricio, Christian Dittmar, Yigitcan Özer, and Meinard Müller. 2019. “NMF Toolbox: Music Processing Applications of Nonnegative Matrix Factorization.” In.
Luo, Yin-Jyun, Kat Agres, and Dorien Herremans. 2019. “Learning Disentangled Representations of Timbre and Pitch for Musical Instrument Sounds Using Gaussian Mixture Variational Autoencoders.” In Proceedings of the 20th Conference of the International Society for Music Information Retrieval.
MacKinlay, Daniel, and Zdravko I Botev. 2019. “Mosaic Style Transfer Using Sparse Autocorrelograms.” In Proceedings of the 20th Conference of the International Society for Music Information Retrieval, 5. Delft.
Nieto, Oriol, Matthew McCallum, Matthew E P Davies, Andrew Robertson, Adam Stark, and Eran Egozy. 2019. “The Harmonix Set: Beats, Downbeats, and Functional Segment Annotations of Western Popular Music.” In, 8.
Pati, Ashis, Alexander Lerch, and Gaëtan Hadjeres. 2019. “Learning to Traverse Latent Spaces for Musical Score Inpainting.” In Proceedings of the 20th Conference of the International Society for Music Information Retrieval.
Pfleiderer, Martin, Klaus Frieler, Jakob Abeßer, Wolf-Georg Zaddach, and Benjamin Burkhart, eds. 2017. Inside the Jazzomat - New Perspectives for Jazz Research. Schott Campus.
Robinson, Kyle, and Dan Brown. 2019. “Automated Time-Frequency Domain Audio Crossfades Using Graph Cuts.” In, 2.
Smaragdis, Paris. 2004. “Non-Negative Matrix Factor Deconvolution; Extraction of Multiple Sound Sources from Monophonic Inputs.” In Independent Component Analysis and Blind Signal Separation, edited by Carlos G. Puntonet and Alberto Prieto, 494–99. Lecture Notes in Computer Science. Granada, Spain: Springer Berlin Heidelberg.
Smith, Jordan B L, Yuta Kawasaki, and Masataka Goto. 2019. “Unmixer: An Interface for Extracting and Remixing Loops.” In, 8.
Stöter, Fabian-Robert, Stefan Uhlich, Antoine Liutkus, and Yuki Mitsufuji. 2019. “Open-Unmix - A Reference Implementation for Music Source Separation.” Journal of Open Source Software 4 (41): 1667.
Thickstun, John, Zaid Harchaoui, Dean P Foster, and Sham M Kakade. 2019. “Coupled Recurrent Models for Polyphonic Music Composition.” In, 8.
Tsuchida, Shuhei, Satoru Fukayama, Masahiro Hamasaki, and Masataka Goto. 2019. “AIST Dance Video Database: Multi-Genre, Multi-Dancer, and Multi-Camera Database for Dance Information Processing.” In, 10.
Wei, I-Chieh, Chih-Wei Wu, and Li Su. 2019. “Generating Structured Drum Pattern Using Variational Autoencoder and Self-Similarity Matrix.” In, 8.
Yang, Ruihan, Dingsu Wang, Ziyu Wang, Tianyao Chen, Junyan Jiang, and Gus Xia. 2019. “Deep Music Analogy Via Latent Representation Disentanglement.” In Proceedings of the 20th Conference of the International Society for Music Information Retrieval.
