Generating Music with GANs: An Overview and Case Studies by Hao-Wen Dong and Yi-Hsuan Yang.
Waveform-based music processing with deep learning by Sander Dieleman, Jordi Pons and Jongpil Lee. I have blogged a bunch of Jordi’s work here under source separation. Sander’s presentation had some interesting framings about:
- mode-seeking versus mode-covering approximations to probability distributions.
- sparse versus dense conditioning signals.
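To make the first distinction concrete, here is a toy numerical sketch of my own (not from the tutorial): fit a single Gaussian to a bimodal target under forward KL, KL(p‖q), versus reverse KL, KL(q‖p). Forward KL spreads mass to cover both modes; reverse KL locks onto one. All names below are illustrative.

```python
import numpy as np

# Discretise a bimodal target density p on a grid.
x = np.linspace(-6, 6, 1201)
dx = x[1] - x[0]

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

p = 0.5 * gauss(x, -2.0, 0.5) + 0.5 * gauss(x, 2.0, 0.5)
p /= p.sum() * dx

def kl(a, b):
    # Discretised KL(a || b); a small epsilon guards the log.
    eps = 1e-12
    return np.sum(a * (np.log(a + eps) - np.log(b + eps))) * dx

# Grid-search a single Gaussian q(mu, sigma) under each objective.
mus = np.linspace(-3.0, 3.0, 61)
sigmas = np.linspace(0.3, 3.0, 28)
best = {}
for name, obj in [("forward", lambda q: kl(p, q)),   # mode-covering
                  ("reverse", lambda q: kl(q, p))]:  # mode-seeking
    scores = [(obj(gauss(x, m, s)), m, s) for m in mus for s in sigmas]
    best[name] = min(scores)[1:]

print("forward KL (mode-covering): mu=%.2f sigma=%.2f" % best["forward"])
print("reverse KL (mode-seeking):  mu=%.2f sigma=%.2f" % best["reverse"])
```

The forward-KL fit straddles both modes with a wide variance; the reverse-KL fit sits tightly on one mode, which is why reverse-KL-style objectives tend to produce sharp but less diverse generations.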
Papers that are useful for my own interests, that is; this is not necessarily an indictment of any papers I do not mention.
Or… See the ISMIR paper explorer.
- Obviously I like my paper (MacKinlay 2019) and think it is the best and most eloquently explained.
- Keunwoo Choi’s Drummernet (K. Choi and Cho 2019) looks like a cunning hack to transcribe drums from audio, by learning to play a drum synthesizer.
- (J. Choi et al. 2019) claims to solve a lot of the notorious problems with noisy labelling in music with a zero-shot learning model.
- Stefan Lattner’s Drumnet (Lattner and Grachten 2019) is a remarkably simple model for rhythm generation.
- Magdalena Fuentes et al., on detecting microtiming in Afro-Latin rhythms, was super fun (Fuentes et al. 2019).
- Work by Ashis Pati et al. is nice: Learning to Traverse Latent Spaces for Musical Score Inpainting (Pati, Lerch, and Hadjeres 2019).
- Generating Structured Drum Pattern Using Variational Autoencoder and Self-similarity Matrix (Wei, Wu, and Su 2019). I hope to track these folks down, but we are presenting our research at the same time. The covariance structure appeals to me.
- Supervised symbolic music style translation using synthetic data (Cífka and Richard 2019) is kind of an automated Señor Coconut.
- Spleeter (Hennequin et al. 2019) from Deezer labs is one deep learning approach
- Open Unmix (Stöter et al. 2019) from Sony CSL labs is another deep learning approach
- UNMIXER (Smith, Kawasaki, and Goto 2019) a web UI for a cute hand-rolled matrix factorisation method
All blogged under source separation.
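For flavour, here is the kind of hand-rolled matrix factorisation that loop-extraction interfaces like UNMIXER build upon, in miniature. This is my own toy sketch of classic Euclidean NMF with Lee-Seung multiplicative updates, not the code of any of the tools above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy magnitude "spectrogram": two spectral templates, random activations.
true_W = np.array([[1.0, 0.0],
                   [0.5, 0.1],
                   [0.0, 1.0],
                   [0.1, 0.6]])            # (frequency bins, sources)
true_H = rng.random((2, 50))               # (sources, time frames)
V = true_W @ true_H + 1e-6                 # keep strictly positive

# Random nonnegative initialisation of the factors.
rank = 2
W = rng.random((V.shape[0], rank)) + 0.1
H = rng.random((rank, V.shape[1])) + 0.1

for _ in range(500):
    # Lee-Seung multiplicative updates for min ||V - WH||_F^2, W, H >= 0.
    # Multiplicative form preserves nonnegativity automatically.
    H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)

rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative reconstruction error: {rel_err:.4f}")
```

The columns of `W` recover spectral templates and the rows of `H` their time-varying gains; masking `V` by one component's reconstruction is the simplest route from this factorisation to an actual separated source.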
A lot of the authors would like to impose a certain factorisation, or “near”-factorisation, over a latent space into humanly interpretable dimensions. So they would like to disentangle, say, timbre from pitch from loudness, or similar. I would like to return to this problem; it looks fun.
- Coupled Recurrent Models for Polyphonic Music Composition (Thickstun et al. 2019). It is phrased as a neural network problem, but their central question is, to my mind: What graphical model structure best approximates polyphonic scores?
- Hanoi Hantrakul presenting Fast and Flexible Neural Audio Synthesis. The oral presentation turned out to be an advertisement for the successor project, Differentiable DSP.
- Yin-Jyun Luo et al. have done something interesting in Learning Disentangled Representations of Timbre and Pitch for Musical Instrument Sounds Using Gaussian Mixture Variational Autoencoders (Luo, Agres, and Herremans 2019). Check out the demo page.
- Learning Complex Basis Functions for Invariant Representations of Audio (Lattner, Dörfler, and Arzt 2019); here it is about finding basis functions which preserve a priori symmetries. Appended to the sparse coding page.
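To ground that idea: the textbook special case of a complex basis whose modulus yields an invariant representation is the Fourier basis, where discarding phase makes the magnitude spectrum exactly invariant to circular time shifts. The paper learns more general such bases; this little check of mine only demonstrates the hand-designed case:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(256)

# A circular shift multiplies each Fourier coefficient by a unit-modulus
# phase factor, so taking the modulus removes the shift entirely.
mag = np.abs(np.fft.fft(x))
mag_shifted = np.abs(np.fft.fft(np.roll(x, 37)))

print("max deviation:", np.max(np.abs(mag - mag_shifted)))
```

Lattner et al.'s contribution, as I read it, is learning complex filter pairs that play this role for symmetries (such as pitch transposition) where no closed-form basis is handed to you.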
- Deep Music Analogy Via Latent Representation Disentanglement (Yang et al. 2019)
- mirdata, (Bittner et al. 2019)
- The AcousticBrainz Genre Dataset (Bogdanov et al. 2019)
- Harmonix set (Nieto et al. 2019)
- AIST Dance DB (Tsuchida et al. 2019). Dance videos!
All blogged under audio corpora.
- https://acids-ircam.github.io/flow_synthesizer/ (Esling et al. 2019)
- Differentiable DSP
- Orb composer
- Leigh Smith and the LANDR semantic audio search Selector
The So Strangely music science podcast.
Bittner, Rachel M, Magdalena Fuentes, David Rubinstein, Andreas Jansson, Keunwoo Choi, and Thor Kell. 2019. “Mirdata: Software for Reproducible Usage of Datasets.” In International Society for Music Information Retrieval (ISMIR) Conference.
Bogdanov, Dmitry, Alastair Porter, Hendrik Schreiber, Julián Urbano, and Sergio Oramas. 2019. “The AcousticBrainz Genre Dataset: Multi-Source, Multi-Level, Multi-Label, and Large-Scale.” In Proceedings of the 20th Conference of the International Society for Music Information Retrieval.
Choi, Jeong, Jongpil Lee, Jiyoung Park, and Juhan Nam. 2019. “Zero-Shot Learning for Audio-Based Music Classification and Tagging.” In Proceedings of the 20th Conference of the International Society for Music Information Retrieval.
Choi, Keunwoo, and Kyunghyun Cho. 2019. “Deep Unsupervised Drum Transcription.” In Proceedings of the 20th Conference of the International Society for Music Information Retrieval.
Cífka, Ondřej, and Gaël Richard. 2019. “Supervised Symbolic Music Style Translation Using Synthetic Data.” In Proceedings of the 20th Conference of the International Society for Music Information Retrieval.
Cífka, Ondřej. 2019. “Supplementary Material: Supervised Symbolic Music Style Translation Using Synthetic Data,” June. https://doi.org/10.5281/zenodo.3250606.
Engel, Jesse, Kumar Krishna Agrawal, Shuo Chen, Ishaan Gulrajani, Chris Donahue, and Adam Roberts. 2019. “GANSynth: Adversarial Neural Audio Synthesis.” In Seventh International Conference on Learning Representations. http://arxiv.org/abs/1902.08710.
Engel, Jesse, Lamtharn (Hanoi) Hantrakul, Chenjie Gu, and Adam Roberts. 2019. “DDSP: Differentiable Digital Signal Processing.” In. https://openreview.net/forum?id=B1x1ma4tDr.
Esling, Philippe, Naotake Masuda, Adrien Bardet, Romeo Despres, and Axel Chemla. 2019. “Flowsynth: Semantic and Vocal Synthesis Control.” In Late-Breaking/Demo Session of the 20th Conference of the International Society for Music Information Retrieval.
Foroughmand, Hadrien, and Geoffroy Peeters. 2019. “Deep-Rhythm for Tempo Estimation and Rhythm Pattern Recognition.” In Proceedings of the 20th Conference of the International Society for Music Information Retrieval.
Fuentes, Magdalena, Lucas S Maia, Martín Rocamora, Luiz W P Biscainho, Hélène C Crayencour, Slim Essid, and Juan P Bello. 2019. “Tracking Beats and Microtiming in Afro-Latin American Music Using Conditional Random Fields and Deep Learning.” In Proceedings of the 20th Conference of the International Society for Music Information Retrieval.
Hennequin, Romain, Anis Khlif, Felix Voituret, and Manuel Moussallam. 2019. “Spleeter: A Fast and State-of-the-Art Music Source Separation Tool with Pre-Trained Models.” In Late-Breaking/Demo Session of the 20th Conference of the International Society for Music Information Retrieval.
Kalchbrenner, Nal, Erich Elsen, Karen Simonyan, Seb Noury, Norman Casagrande, Edward Lockhart, Florian Stimberg, Aaron van den Oord, Sander Dieleman, and Koray Kavukcuoglu. 2018. “Efficient Neural Audio Synthesis,” February. http://arxiv.org/abs/1802.08435.
Lattner, Stefan, Monika Dörfler, and Andreas Arzt. 2019. “Learning Complex Basis Functions for Invariant Representations of Audio.” In Proceedings of the 20th Conference of the International Society for Music Information Retrieval, 8. http://archives.ismir.net/ismir2019/paper/000085.pdf.
Lattner, Stefan, and Maarten Grachten. 2019. “High-Level Control of Drum Track Generation Using Learned Patterns of Rhythmic Interaction.” In IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA 2019). http://arxiv.org/abs/1908.00948.
López-Serrano, Patricio, Christian Dittmar, Yigitcan Özer, and Meinard Müller. 2019. “NMF Toolbox: Music Processing Applications of Nonnegative Matrix Factorization.” In Proceedings of the 20th Conference of the International Society for Music Information Retrieval.
Luo, Yin-Jyun, Kat Agres, and Dorien Herremans. 2019. “Learning Disentangled Representations of Timbre and Pitch for Musical Instrument Sounds Using Gaussian Mixture Variational Autoencoders.” In Proceedings of the 20th Conference of the International Society for Music Information Retrieval. http://arxiv.org/abs/1906.08152.
MacKinlay, Daniel. 2019. “Mosaic Style Transfer Using Sparse Autocorrelograms.” In Proceedings of the 20th Conference of the International Society for Music Information Retrieval, 5. Delft. http://archives.ismir.net/ismir2019/paper/000109.pdf.
Nieto, Oriol, Matthew McCallum, Matthew E P Davies, Andrew Robertson, Adam Stark, and Eran Egozy. 2019. “The Harmonix Set: Beats, Downbeats, and Functional Segment Annotations of Western Popular Music.” In Proceedings of the 20th Conference of the International Society for Music Information Retrieval.
Pati, Ashis, Alexander Lerch, and Gaëtan Hadjeres. 2019. “Learning to Traverse Latent Spaces for Musical Score Inpainting.” In Proceedings of the 20th Conference of the International Society for Music Information Retrieval. http://arxiv.org/abs/1907.01164.
Pfleiderer, Martin, Klaus Frieler, Jakob Abeßer, Wolf-Georg Zaddach, and Benjamin Burkhart, eds. 2017. Inside the Jazzomat - New Perspectives for Jazz Research. Schott Campus.
Robinson, Kyle, and Dan Brown. 2019. “Automated Time-Frequency Domain Audio Crossfades Using Graph Cuts.” In Late-Breaking/Demo Session of the 20th Conference of the International Society for Music Information Retrieval.
Smaragdis, Paris. 2004. “Non-Negative Matrix Factor Deconvolution; Extraction of Multiple Sound Sources from Monophonic Inputs.” In Independent Component Analysis and Blind Signal Separation, edited by Carlos G. Puntonet and Alberto Prieto, 494–99. Lecture Notes in Computer Science. Granada, Spain: Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-540-30110-3_63.
Smith, Jordan B L, Yuta Kawasaki, and Masataka Goto. 2019. “Unmixer: An Interface for Extracting and Remixing Loops.” In Proceedings of the 20th Conference of the International Society for Music Information Retrieval.
Stöter, Fabian-Robert, Stefan Uhlich, Antoine Liutkus, and Yuki Mitsufuji. 2019. “Open-Unmix - A Reference Implementation for Music Source Separation.” Journal of Open Source Software 4 (41): 1667. https://doi.org/10.21105/joss.01667.
Thickstun, John, Zaid Harchaoui, Dean P Foster, and Sham M Kakade. 2019. “Coupled Recurrent Models for Polyphonic Music Composition.” In Proceedings of the 20th Conference of the International Society for Music Information Retrieval.
Tsuchida, Shuhei, Satoru Fukayama, Masahiro Hamasaki, and Masataka Goto. 2019. “AIST Dance Video Database: Multi-Genre, Multi-Dancer, and Multi-Camera Database for Dance Information Processing.” In Proceedings of the 20th Conference of the International Society for Music Information Retrieval.
Wei, I-Chieh, Chih-Wei Wu, and Li Su. 2019. “Generating Structured Drum Pattern Using Variational Autoencoder and Self-Similarity Matrix.” In Proceedings of the 20th Conference of the International Society for Music Information Retrieval.
Yang, Ruihan, Dingsu Wang, Ziyu Wang, Tianyao Chen, Junyan Jiang, and Gus Xia. 2019. “Deep Music Analogy via Latent Representation Disentanglement.” In Proceedings of the 20th Conference of the International Society for Music Information Retrieval. http://arxiv.org/abs/1906.03626.