Arpeggiate by numbers

Workaday automatic composition and sequencing



Where my audio software frameworks page does more DSP, this is mostly about MIDI—choosing notes, not timbres. A cousin of generative art with machine learning, with less AI and more UX.

Sometimes you don’t want to measure a chord, or hear a chord; you just want to write a chord.

See also machine listening, musical corpora, musical metrics, synchronisation. The discrete, symbolic cousin to analysis/resynthesis.

Related projects: How I would do generative art with neural networks and learning gamelan.

Colin Morris’s SongSim visualises lyrics rather than notes, but wow, don’t the results look handsome?

Sonification

MIDITime maps time series data onto notes with some basic music theory baked in.
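The core move in this kind of sonification is small enough to sketch by hand. The following is not MIDITime’s actual API, just the underlying idea: rescale a data series into a pitch range, then snap each value onto a scale so the result sounds vaguely tonal.

```python
# Minimal sonification sketch: map a time series onto MIDI pitches,
# snapped to a C major scale. Not MIDITime's actual API, just the idea.

C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # pitch classes of C major

def to_scale(pitch):
    """Snap a MIDI pitch down to the nearest pitch in C major."""
    octave, pc = divmod(pitch, 12)
    snapped = max(p for p in C_MAJOR if p <= pc)
    return octave * 12 + snapped

def sonify(series, low=48, high=84):
    """Rescale data into [low, high] and snap each value to the scale."""
    lo, hi = min(series), max(series)
    span = (hi - lo) or 1.0
    return [to_scale(int(low + (x - lo) / span * (high - low)))
            for x in series]

notes = sonify([0.1, 0.5, 0.9, 0.3])  # → [48, 65, 84, 57], all in C major
```

MIDITime’s music theory is richer than this, but the shape of the problem (scale data, quantise to scale, emit notes) is the same.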

Geometric approaches

Dmitri Tymoczko claims that music data is most naturally regarded as living on an orbifold (“quotient manifold”), which I’m sure you could do some clever regression upon, though I can’t yet see how. Orbifolds are, AFAICT, something like what you get when you have a bag of regressors instead of a tuple, and they evoke the string-bag models of the natural-language information-retrieval people, except there is not as much hustle for music as there is for NLP. Nonetheless manifold regression is a thing, and regression on manifolds is also a thing, so there is surely some stuff to be done there. Various complications arise. For example, it’s not a single scalar (which note) we are predicting at each time step, but some kind of joint distribution over several notes and their articulations. And is it even sane to do this one step at a time? Lengthy melodic figures and motifs dominate in real compositions; how do you represent those tractably?
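The quotient idea itself is just bookkeeping, even if the geometry is not. A chord is an unordered bag of pitch classes, so many distinct MIDI voicings collapse to one point; quotienting by transposition as well collapses further. A toy illustration (the equivalence classes only, none of the orbifold geometry):

```python
# A chord as an equivalence class: quotient MIDI notes by octave and
# ordering, and optionally by transposition. Illustrates the "bag of
# regressors instead of a tuple" structure, not the manifold geometry.

def chord_class(midi_notes):
    """Canonical representative: sorted pitch classes mod 12."""
    return tuple(sorted({n % 12 for n in midi_notes}))

def transposition_class(midi_notes):
    """Further quotient by transposition: least rotation of the class."""
    pcs = chord_class(midi_notes)
    return min(tuple(sorted((p - root) % 12 for p in pcs)) for root in pcs)

# Different voicings of C major collapse to the same point:
assert chord_class([60, 64, 67]) == chord_class([48, 76, 67]) == (0, 4, 7)
# C major and G major collapse once we also quotient by transposition:
assert transposition_class([60, 64, 67]) == transposition_class([67, 71, 74])
```

Any regression that respects these identifications has to live on the quotient space rather than on raw note tuples, which is the crux of Tymoczko’s point.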

Further, it’s the joint distribution of the evolution of the harmonics and the noise and all that other timbral content that our ear can resolve, not just the symbolic melody. And we know from psycho-acoustics that these will be coupled— dissonance of two tones depends on frequency and amplitude of the spectral components of each, to name one commonly-postulated factor.
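That postulated coupling has a standard quantitative form in Sethares (1997): the roughness of two partials depends on both their frequency separation and their amplitudes. A sketch, with constants roughly as in Sethares’ fit of the Plomp–Levelt curves (treat the exact numbers as approximate):

```python
import math

# Sketch of the Sethares/Plomp-Levelt roughness model (Sethares 1997):
# dissonance between two partials depends on their frequency separation
# AND their amplitudes. Constants approximately as in Sethares' fit.

def pair_dissonance(f1, a1, f2, a2):
    b1, b2, xstar = 3.5, 5.75, 0.24
    s = xstar / (0.0207 * min(f1, f2) + 18.96)  # critical-band scaling
    d = abs(f2 - f1)
    return min(a1, a2) * (math.exp(-b1 * s * d) - math.exp(-b2 * s * d))

def dissonance(spectrum_a, spectrum_b):
    """Total roughness between two tones given as [(freq, amp), ...] lists."""
    return sum(pair_dissonance(f1, a1, f2, a2)
               for f1, a1 in spectrum_a for f2, a2 in spectrum_b)

# For pure sines, a semitone (440 vs 466 Hz) is rougher than a fifth:
rough = dissonance([(440, 1.0)], [(466, 1.0)])
smooth = dissonance([(440, 1.0)], [(660, 1.0)])
```

The point for symbolic composition is that `dissonance` takes spectra, not note names, so the same written interval can be consonant or dissonant depending on timbre.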

In any case, these wrinkles aside, if I could predict the conditional distribution of the sequence in a way that produced recognisably musical sound, then simulate from it, I would be happy for a variety of reasons. So I guess if I cracked this problem in the apparently direct way it might be by “nonparametric vector regression on an orbifold”, but with possibly heroic computation-wrangling en route.

Neural approaches

MuseNet is a famous recent one, from OpenAI:

We’ve created MuseNet, a deep neural network that can generate 4-minute musical compositions with 10 different instruments, and can combine styles from country to Mozart to the Beatles. MuseNet was not explicitly programmed with our understanding of music, but instead discovered patterns of harmony, rhythm, and style by learning to predict the next token in hundreds of thousands of MIDI files. MuseNet uses the same general-purpose unsupervised technology as GPT-2, a large-scale transformer model trained to predict the next token in a sequence, whether audio or text.
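Stripped of the transformer, “learning to predict the next token” has a degenerate non-neural ancestor: count transitions, then sample autoregressively. A toy first-order Markov chain over MIDI pitches shows the shape of the training/generation loop (the real models condition on far longer contexts):

```python
import random
from collections import defaultdict

# The "predict the next token" framing of MuseNet/GPT-2 in its most
# degenerate, non-neural form: a first-order Markov chain over MIDI
# pitches, trained by counting and sampled autoregressively.

def train(sequences):
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def generate(counts, start, length, rng=random.Random(0)):
    out = [start]
    for _ in range(length - 1):
        nxt = counts.get(out[-1])
        if not nxt:                 # dead end: no observed successor
            break
        tokens, weights = zip(*nxt.items())
        out.append(rng.choices(tokens, weights=weights)[0])
    return out

model = train([[60, 62, 64, 62, 60], [60, 64, 67, 64, 60]])
melody = generate(model, start=60, length=8)
```

Everything interesting about MuseNet lies in replacing that one-step transition table with a model that captures long-range structure; the sampling loop stays essentially this.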

Google’s Magenta produces some sorta-interesting stuff, or at least stuff I always feel is so close to actually being interesting without quite making it. MidiMe, a lightweight transfer-learning(?) approach to personalising an overfit or overlarge MIDI composition model, looks like a potentially nice hack, for example.

Composition assistants

Nestup

Totally my jam! A context-free tempo grammar for musical beats. Free.

See Nestup, and the source at cutelabnyc/nested-tuplets: fancy javascript for manipulating nested tuplets.
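The nested-tuplet idea is easy to sketch (this is a toy in Python, not Nestup’s actual grammar or syntax): represent a rhythm as a tree, where each list subdivides its parent’s duration evenly, so arbitrarily nested tuplets fall out for free.

```python
from fractions import Fraction

# Toy version of the nested-tuplet idea: a rhythm tree where each list
# splits its parent's duration evenly among its children. Not Nestup's
# syntax, just the recursive structure underneath it.

def durations(tree, total=Fraction(1)):
    if not isinstance(tree, list):          # leaf: a single note
        return [total]
    share = total / len(tree)
    return [d for child in tree for d in durations(child, share)]

# One bar: a half note, then a quarter-note triplet in the second half.
assert durations(['n', ['n', 'n', 'n']]) == \
    [Fraction(1, 2), Fraction(1, 6), Fraction(1, 6), Fraction(1, 6)]
```

Exact rational arithmetic matters here: triplets-inside-quintuplets never quite work in floating point, which is presumably part of why a dedicated grammar is worth having.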

Scaler

Scaler 2:

Scaler 2 can listen to incoming MIDI or audio data and detect the key your music is in. It will suggest chords and chord progressions that will fit with your song. Scaler 2 can send MIDI to a virtual instrument in your DAW, but it also has 30 onboard instruments to play with as well.
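Scaler’s key-detection algorithm is proprietary, but the classic textbook approach to the same problem is Krumhansl–Schmuckler profile matching: build a pitch-class histogram of the incoming MIDI and correlate it against rotated major/minor key profiles. A sketch (the profile numbers are the standard Krumhansl–Kessler values):

```python
# Key detection sketch: Krumhansl-Schmuckler profile matching. Not
# Scaler's actual (proprietary) algorithm, just the standard approach.

MAJOR = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88]
MINOR = [6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17]
NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def correlate(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

def detect_key(midi_notes):
    hist = [0] * 12
    for n in midi_notes:
        hist[n % 12] += 1
    scores = []
    for profile, suffix in [(MAJOR, ' major'), (MINOR, ' minor')]:
        for root in range(12):
            rotated = [profile[(pc - root) % 12] for pc in range(12)]
            scores.append((correlate(hist, rotated), NAMES[root] + suffix))
    return max(scores)[1]
```

Feed it a C major scale and it says `'C major'`; from the detected key, suggesting diatonic chords is a lookup table away, which is presumably the bulk of what tools in this genre do.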

J74

⭐⭐⭐⭐ (EUR12 + EUR15)

Fabrizio Poce’s J74 Progressive and J74 Bassline are chord-progression generators built with Ableton Live’s scripting engine, so if you are using Ableton they might be handy. I used them myself, and although I enjoyed them, since quitting Ableton for Bitwig I don’t miss them. They did make Ableton crash on occasion, so they are not suited to live performance, which is a pity, because that would be a wonderful value proposition. The real-time-oriented J74 HarmoTools from the same developer are less sophisticated but worth trying, especially since they are free, and he has a lot of other clever hacks there too. Do go to his site and try his stuff out.

Helio

⭐⭐⭐⭐ Free

Helio in action

Helio is free and cross platform and totally worth a shot. There is a chord model in there and version control (!) but you might not notice the chord thing if you aren’t careful, because the UI is idiosyncratic. Great for left-field inspiration, if not a universal composition tool.

Orca

Free, open source. ⭐⭐⭐⭐

orca in action

Orca is a bespoke, opinionated weird grid-based programmable sequencer. It doesn’t aspire to solve every composition problem, but it does guarantee weird, individual, quirky algorithmic mayhem. It’s made by two people who live on a boat.

It can run in a browser.

Hookpad

⭐⭐⭐ (Freemium/USD149)

Hookpad is a spin-off of cult pop-analysis website Hook Theory. I ran into it later than Odesi, so I frame my review in terms of Odesi, though it might be older. It shares Odesi’s weakness of being a webapp. However, by being basically just a webpage, rather than a multi-gigabyte monster app with the restrictions of a webpage, it is less aggravating than Odesi. It assumes a little (but not much) more music theory from the user. Also a plus: it is attached to an excellent library of pop-song chord-progression examples and analyses in the form of the (recommended) Hook Theory site.

Odesi

⭐⭐ (USD49)

odesi in action

Odesi has been doing lots of advertising of their poptastic interface to generate pop music. It’s like Synfire-lite, with a library of top-40 melody tricks and rhythms. The desktop version tries to install gigabytes of synths of meagre merit on your machine, which is a giant waste of space and time if you are using a computer which already has synths on, which you are because this is not the 90s, and in any case you presumably have this app because you are already a music producer and therefore already have synths. However, unlike 90s apps, it requires you to be online, which is dissatisfying if you like to be offline in your studio so you can get shit done without distractions. So aggressive is it in its desire to be online that any momentary interruption in your internet connection causes the interface to hang for 30 seconds, presenting you with a reassurance that none of your work is lost. Then it reloads, with some of your work nonetheless lost. A good idea marred by an irritating execution that somehow combines the worst of webapps and desktop apps.

Intermorphic

USD25/USD40

Intermorphic’s Mixtikl and Noatikl are granddaddy esoteric composer apps. Although the creators have doubtless put much effort into their groundbreaking user interfaces, I have not used them, because of the marketing material, which is notable for an inability to explain the apps or provide compelling demonstrations or use cases. I get the feeling they had high-art aspirations but have ended up doing ambient noodles in order to sell product. Maybe I’m not being fair. If I find spare cash at some point I will find out.

Rozeta

Ruismaker’s Rozeta (iOS) is a series of apps implementing every nifty fashionable sequencer algorithm in recent memory. I don’t have an iPad, though, so I will not review them.

Rapid compose

Rapid Compose (USD99/USD249) might make decent software, but the developers can’t clearly explain why their app is nice, nor do they provide a demo version.

Synfire

EUR996, so I won’t be buying it, but wow, what a demo video.

Synfire explains how it uses music theory to do large-scale scoring and so on. Get the string section to behave itself or you’ll replace them with MIDIbots.

Harmony Builder

USD39-USD219 depending on heinously complex pricing schemes.

Harmony Builder does classical music theory for you. Will pass your conservatorium finals.

Roll your own

You can’t resist rolling your own?

  • sharp11 is a node.js music theory library for javascript, with a demo application that creates jazz improv.

  • Supercollider of course does this like it does everything else, which is to say, quirkily-badly. Designing user interfaces for it takes years off your life. OTOH, if you are happy with text, this might be a goer.

Arpeggiators
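The core mechanism here is tiny, whatever the commercial offerings bolt on top: take the held chord, walk its notes in a pattern, and emit them on a clock. A minimal sketch:

```python
from itertools import islice

# A minimal arpeggiator: cycle through the held chord's notes in a
# chosen pattern, forever. A clock/scheduler would pull notes from
# this generator; here we just slice a few off.

def arpeggiate(chord, mode='updown'):
    """Yield MIDI pitches from `chord` endlessly in the given pattern."""
    notes = sorted(chord)
    if mode == 'up':
        cycle = notes
    elif mode == 'down':
        cycle = notes[::-1]
    else:  # 'updown': up then back down, without repeating the endpoints
        cycle = notes + notes[-2:0:-1]
    while True:
        yield from cycle

# C major triad, up-down: C E G E C E G E ...
print(list(islice(arpeggiate([60, 64, 67]), 8)))
```

Real arpeggiators add gate length, octave spread, swing, and ratcheting, but they are all decorations on this loop.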

Constraint Composition

All of that too mainstream? Try a weird alternative formalism! How about constraint composition? That is, declarative musical composition by defining constraints on the relations which the notes must satisfy. Sounds fun in the abstract, but in practice it doesn’t especially grab me as a creative tool.

The reference tool for that purpose seems to be Strasheela, built on an obscure, unpopular, and apparently discontinued Prolog-like language called “Oz” (implemented by the “Mozart” system), because using a popular language is not as grand a gesture as claiming none of them are quite Turing-complete enough, in the right way, for your special snowflake application. That language is a ghost town, which means headaches if you wish to use Strasheela in practice. If you wanted to actually use constraint methods, you’d probably use Overtone plus miniKanren (Prolog-for-Lisp), as with the composing schemer, or, to be even more mainstream, just use a conventional constraint solver in a popular language. I am fond of python and ncvx, but there are many choices.
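A homeopathic dose of the idea, with none of Strasheela’s expressiveness: declare the constraints, then let a solver (here, shameless brute force over a finite domain) find every melody satisfying them.

```python
from itertools import product

# Constraint composition in miniature: declare what a melody must
# satisfy, then enumerate solutions. Brute force stands in for a real
# finite-domain constraint solver.

DOMAIN = [60, 62, 64, 65, 67, 69, 71, 72]  # C major, one octave

def satisfies(melody):
    return (melody[0] == 60 and melody[-1] == 60           # tonic at both ends
            and all(a != b for a, b in zip(melody, melody[1:]))   # always move
            and all(abs(a - b) <= 5 for a, b in zip(melody, melody[1:])))  # no big leaps

solutions = [m for m in product(DOMAIN, repeat=4) if satisfies(m)]
```

For four notes this yields a handful of melodies; the declarative style scales to real counterpoint rules (no parallel fifths, voice ranges, and so on), which is exactly the regime where you would swap the brute force for a proper propagation-based solver.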

Anyway, Prolog fans can read on: see Anders and Miranda (2010); Anders and Miranda (2011).

Random ideas

  • How would you reconstruct a piece from its recurrence matrix? or at least constrain pieces by their recurrence matrix?
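Computing the recurrence matrix is at least the easy direction:

```python
# Recurrence matrix of a note sequence: R[i][j] = 1 where note i
# "matches" note j under some relation (here, plain equality).
# Reconstructing a piece from R is the open question above; this is
# only the forward direction.

def recurrence(seq, match=lambda a, b: a == b):
    n = len(seq)
    return [[int(match(seq[i], seq[j])) for j in range(n)] for i in range(n)]

R = recurrence([60, 64, 60, 67])
```

Note that any relabelling of the symbols leaves R unchanged (`recurrence([1, 2, 1]) == recurrence(['a', 'b', 'a'])`), which is precisely why reconstruction from R alone is underdetermined: at best you recover the piece up to a substitution of its vocabulary.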

References

Anders, Torsten, and Eduardo R. Miranda. 2010. Constraint Application with Higher-Order Programming for Modeling Music Theories.” Computer Music Journal 34 (2): 25–38.
———. 2011. Constraint Programming Systems for Modeling Music Theories and Composition.” ACM Computing Surveys 43 (4): 1–38.
Baddeley, A. J., Marie-Colette NM Van Lieshout, and J. Møller. 1996. Markov Properties of Cluster Processes.” Advances in Applied Probability 28 (2): 346–55.
Baddeley, Adrian J, Jesper Møller, and Rasmus Plenge Waagepetersen. 2000. Non- and Semi-Parametric Estimation of Interaction in Inhomogeneous Point Patterns.” Statistica Neerlandica 54 (3): 329–50.
Bigo, Louis, Jean-Louis Giavitto, and Antoine Spicher. 2011. Building Topological Spaces for Musical Objects.” In Proceedings of the Third International Conference on Mathematics and Computation in Music, 13–28. MCM’11. Berlin, Heidelberg: Springer-Verlag.
Bod, Rens. 2001. What Is the Minimal Set of Fragments That Achieves Maximal Parse Accuracy? In Proceedings of the 39th Annual Meeting on Association for Computational Linguistics, 66–73. ACL ’01. Stroudsburg, PA, USA: Association for Computational Linguistics.
———. 2002a. A Unified Model of Structural Organization in Language and Music.” Journal of Artificial Intelligence Research 17 (2002): 289–308.
———. 2002b. Memory-Based Models of Melodic Analysis: Challenging the Gestalt Principles.” Journal of New Music Research 31 (1): 27–36.
Boggs, Paul T., and Janet E. Rogers. 1990. Orthogonal Distance Regression.” Contemporary Mathematics 112: 183–94.
Borghuis, Tijn, Alessandro Tibo, Simone Conforti, Luca Canciello, Lorenzo Brusci, and Paolo Frasconi. 2018. Off the Beaten Track: Using Deep Learning to Interpolate Between Music Genres.” arXiv:1804.09808 [Cs, Eess], April.
Borgs, Christian, Jennifer T. Chayes, Henry Cohn, and Yufei Zhao. 2014. An \(L^p\) Theory of Sparse Graph Convergence I: Limits, Sparse Random Graph Models, and Power Law Distributions.” arXiv:1401.2906 [Math], January.
Boulanger-Lewandowski, Nicolas, Yoshua Bengio, and Pascal Vincent. 2012. Modeling Temporal Dependencies in High-Dimensional Sequences: Application to Polyphonic Music Generation and Transcription.” In 29th International Conference on Machine Learning.
Brette, Romain. 2008. Generation of Correlated Spike Trains.” Neural Computation 0 (0): 080804143617793–28.
Briot, Jean-Pierre, Gaëtan Hadjeres, and François Pachet. 2017. Deep Learning Techniques for Music Generation - A Survey.” arXiv:1709.01620 [Cs], September.
Budney, Ryan, and William Sethares. 2014. Topology of Musical Data.” Journal of Mathematics and Music 8 (1): 73–92.
Collins, Michael, and Nigel Duffy. 2002. Convolution Kernels for Natural Language.” In Advances in Neural Information Processing Systems 14, edited by T. G. Dietterich, S. Becker, and Z. Ghahramani, 625–32. MIT Press.
Croft, John. 2015. Composition Is Not Research.” Tempo 69 (272): 6–11.
Dean, Roger. 2017. Generative Live Music-Making Using Autoregressive Time Series Models: Melodies and Beats.” Journal of Creative Music Systems 1 (2).
Di Lillo, A, G. Motta, and J.A Storer. 2010. A Rotation and Scale Invariant Descriptor for Shape Recognition.” In 2010 17th IEEE International Conference on Image Processing (ICIP), 257–60.
Eigenfeldt, Arne, and Philippe Pasquier. 2013. Considering Vertical and Horizontal Context in Corpus-Based Generative Electronic Dance Music.” In Proceedings of the Fourth International Conference on Computational Creativity. Vol. 72.
Elmsley, Andrew J., Tillman Weyde, and Newton Armstrong. 2017. Generating Time: Rhythmic Perception, Prediction and Production with Recurrent Neural Networks.” Journal of Creative Music Systems 1 (2).
Gashler, Mike, and Tony Martinez. 2011. Tangent Space Guided Intelligent Neighbor Finding.” In, 2617–24. IEEE.
———. 2012. Robust Manifold Learning with CycleCut.” Connection Science 24 (1): 57–69.
Gillick, Jon, Kevin Tang, and Robert M. Keller. 2010. Machine Learning of Jazz Grammars.” Computer Music Journal 34 (3): 56–66.
Gontis, V., and B. Kaulakys. 2004. Multiplicative Point Process as a Model of Trading Activity.” Physica A: Statistical Mechanics and Its Applications 343 (November): 505–14.
Goroshin, Ross, Joan Bruna, Jonathan Tompson, David Eigen, and Yann LeCun. 2014. Unsupervised Learning of Spatiotemporally Coherent Metrics.” arXiv:1412.6056 [Cs], December.
Graves, Alex. 2013. Generating Sequences With Recurrent Neural Networks.” arXiv:1308.0850 [Cs], August.
Hadjeres, Gaëtan, and François Pachet. 2016. DeepBach: A Steerable Model for Bach Chorales Generation.” arXiv:1612.01010 [Cs], December.
Hadjeres, Gaëtan, Jason Sakellariou, and François Pachet. 2016. Style Imitation and Chord Invention in Polyphonic Music with Exponential Families.” arXiv:1609.05152 [Cs], September.
Hall, Rachel Wells. 2008. Geometrical Music Theory.” Science 320 (5874): 328–29.
Harris, Naftali, and Mathias Drton. 2013. PC Algorithm for Nonparanormal Graphical Models.” Journal of Machine Learning Research 14 (1): 3365–83.
Haussler, David. 1999. Convolution Kernels on Discrete Structures.” Technical report, UC Santa Cruz.
Herremans, Dorien, and Ching-Hua Chuan. 2017. Modeling Musical Context with Word2vec.” In Proceedings of the First International Conference on Deep Learning and Music, Anchorage, US, May, 2017.
Hinton, Geoffrey E., Simon Osindero, and Kejie Bao. 2005. Learning Causally Linked Markov Random Fields.” In Proceedings of the 10th International Workshop on Artificial Intelligence and Statistics, 128–35. Citeseer.
Huang, Cheng-Zhi Anna, Ashish Vaswani, Jakob Uszkoreit, Ian Simon, Curtis Hawthorne, Noam Shazeer, Andrew M. Dai, Matthew D. Hoffman, Monica Dinculescu, and Douglas Eck. 2018. Music Transformer: Generating Music with Long-Term Structure,” September.
Huron, David. 1994. Interval-Class Content in Equally Tempered Pitch-Class Sets: Common Scales Exhibit Optimum Tonal Consonance.” Music Perception: An Interdisciplinary Journal 11 (3): 289–305.
Hutchings, P. 2017. Talking Drums: Generating Drum Grooves with Neural Networks.” In arXiv:1706.09558 [Cs].
Jordan, Michael I., and Yair Weiss. 2002. Probabilistic Inference in Graphical Models.” Handbook of Neural Networks and Brain Theory.
Kaulakys, B., V. Gontis, and M. Alaburda. 2005. Point Process Model of \(1∕f\) Noise Vs a Sum of Lorentzians.” Physical Review E 71 (5): 051105.
Kontorovich, Leonid (Aryeh), Corinna Cortes, and Mehryar Mohri. 2008. Kernel Methods for Learning Languages.” Theoretical Computer Science, Algorithmic Learning Theory, 405 (3): 223–36.
Korzeniowski, Filip, David R. W. Sears, and Gerhard Widmer. 2018. A Large-Scale Study of Language Models for Chord Prediction.” arXiv:1804.01849 [Cs, Eess, Stat], April.
Kroese, Dirk P., and Zdravko I. Botev. 2013. Spatial Process Generation.” arXiv:1308.0399 [Stat], August.
Krumin, Michael, and Shy Shoham. 2009. Generation of Spike Trains with Controlled Auto- and Cross-Correlation Functions.” Neural Computation 21 (6): 1642–64.
Lafferty, John, and Larry Wasserman. 2008. Rodeo: Sparse, Greedy Nonparametric Regression.” The Annals of Statistics 36 (1): 28–63.
Lee, Su-In, Varun Ganapathi, and Daphne Koller. 2006. Efficient Structure Learning of Markov Networks Using $ L_1 $-Regularization.” In Advances in Neural Information Processing Systems, 817–24. MIT Press.
Lieshout, Marie-Colette N. M. van. 1996. On Likelihoods for Markov Random Sets and Boolean Models.” In Proceedings of the International Symposium.
Liu, Han, Xi Chen, Larry Wasserman, and John D. Lafferty. 2010. Graph-Valued Regression.” In Advances in Neural Information Processing Systems 23, edited by J. D. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, and A. Culotta, 1423–31. Curran Associates, Inc.
Liu, Han, Fang Han, Ming Yuan, John Lafferty, and Larry Wasserman. 2012. The Nonparanormal SKEPTIC.” arXiv:1206.6488 [Cs, Stat], June.
Liu, Han, John Lafferty, and Larry Wasserman. 2009. The Nonparanormal: Semiparametric Estimation of High Dimensional Undirected Graphs.” Journal of Machine Learning Research 10 (December): 2295–2328.
Liu, Han, Kathryn Roeder, and Larry Wasserman. 2010. Stability Approach to Regularization Selection (StARS) for High Dimensional Graphical Models.” In Advances in Neural Information Processing Systems 23, edited by J. D. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, and A. Culotta, 1432–40. Curran Associates, Inc.
Lodhi, Huma, Craig Saunders, John Shawe-Taylor, Nello Cristianini, and Chris Watkins. 2002. Text Classification Using String Kernels.” Journal of Machine Learning Research 2 (March): 419–44.
Madjiheurem, Sephora, Lizhen Qu, and Christian Walder. 2016. Chord2Vec: Learning Musical Chord Embeddings.”
Meinshausen, Nicolai, and Peter Bühlmann. 2006. High-Dimensional Graphs and Variable Selection with the Lasso.” The Annals of Statistics 34 (3): 1436–62.
———. 2010. Stability Selection.” Journal of the Royal Statistical Society: Series B (Statistical Methodology) 72 (4): 417–73.
Møller, Jesper, and Rasmus Plenge Waagepetersen. 2007. Modern Statistics for Spatial Point Processes.” Scandinavian Journal of Statistics 34 (4): 643–84.
Montanari, Andrea. 2015. Computational Implications of Reducing Data to Sufficient Statistics.” Electronic Journal of Statistics 9 (2): 2370–90.
Moustafa, Karim Abou-, Dale Schuurmans, and Frank Ferrie. 2013. Learning a Metric Space for Neighbourhood Topology Estimation: Application to Manifold Learning.” In Journal of Machine Learning Research, 341–56.
Papadopoulos, Alexandre, François Pachet, Pierre Roy, and Jason Sakellariou. 2015. Exact Sampling for Regular and Markov Constraints with Belief Propagation.” In Principles and Practice of Constraint Programming, 341–50. Lecture Notes in Computer Science. Switzerland: Springer, Cham.
Pollard, Dave. 2004. “Hammersley-Clifford Theorem for Markov Random Fields.”
Possolo, Antonio. 1986. Estimation of Binary Markov Random Fields.” Department of Statistics Preprints, University of Washington, Seattle.
Rathbun, Stephen L. 1996. Estimation of Poisson Intensity Using Partially Observed Concomitant Variables.” Biometrics, 226–42.
Ravikumar, Pradeep D., Han Liu, John D. Lafferty, and Larry A. Wasserman. 2007. SpAM: Sparse Additive Models.” In NIPS.
Ravikumar, Pradeep, Martin J. Wainwright, and John D. Lafferty. 2010. High-Dimensional Ising Model Selection Using ℓ1-Regularized Logistic Regression.” The Annals of Statistics 38 (3): 1287–1319.
Reese, K., R. Yampolskiy, and A. Elmaghraby. 2012. A Framework for Interactive Generation of Music for Games.” In 2012 17th International Conference on Computer Games (CGAMES), 131–37. CGAMES ’12. Washington, DC, USA: IEEE Computer Society.
Ripley, B. D., and F. P. Kelly. 1977. Markov Point Processes.” Journal of the London Mathematical Society s2-15 (1): 188–92.
Sethares, William A. 1997. Specifying Spectra for Musical Scales.” The Journal of the Acoustical Society of America 102 (4): 2422–31.
Sethares, William A., Andrew J. Milne, Stefan Tiedje, Anthony Prechtl, and James Plamondon. 2009. Spectral Tools for Dynamic Tonality and Audio Morphing.” Computer Music Journal 33 (2): 71–84.
Tillmann, Barbara, Jamshed J. Bharucha, and Emmanuel Bigand. 2000. Implicit Learning of Tonality: A Self-Organizing Approach.” Psychological Review 107 (4): 885.
Tsushima, Hiroaki, Eita Nakamura, Katsutoshi Itoyama, and Kazuyoshi Yoshii. 2017. Generative Statistical Models with Self-Emergent Grammar of Chord Sequences.” arXiv:1708.02255 [Cs], August.
Tymoczko, Dmitri. 2006. The Geometry of Musical Chords.” Science 313 (5783): 72–74.
———. 2009. Generalizing Musical Intervals.” Journal of Music Theory 53 (2): 227–54.
Veitch, Victor, and Daniel M. Roy. 2015. The Class of Random Graphs Arising from Exchangeable Random Measures.” arXiv:1512.03099 [Cs, Math, Stat], December.
Walder, Christian, and Dongwoo Kim. 2018. Neural Dynamic Programming for Musical Self Similarity.” In International Conference on Machine Learning, 5105–13. PMLR.
Wasserman, Larry, Mladen Kolar, and Alessandro Rinaldo. 2013. Estimating Undirected Graphs Under Weak Assumptions.” arXiv:1309.6933 [Cs, Math, Stat], September.
Witten, Daniela M., Robert Tibshirani, and Trevor Hastie. 2009. A Penalized Matrix Decomposition, with Applications to Sparse Principal Components and Canonical Correlation Analysis.” Biostatistics, January, kxp008.
Witten, Daniela M, and Robert J. Tibshirani. 2009. Extensions of Sparse Canonical Correlation Analysis with Applications to Genomic Data.” Statistical Applications in Genetics and Molecular Biology 8 (1): 1–27.
Yanchenko, Anna K., and Sayan Mukherjee. 2017. Classical Music Composition Using State Space Models.” arXiv:1708.03822 [Cs], August.
Yang, Li-Chia, Szu-Yu Chou, and Yi-Hsuan Yang. 2017. MidiNet: A Convolutional Generative Adversarial Network for Symbolic-Domain Music Generation.” In Proceedings of the 18th International Society for Music Information Retrieval Conference (ISMIR’2017), Suzhou, China.
Yedidia, J.S., W.T. Freeman, and Y. Weiss. 2005. Constructing Free-Energy Approximations and Generalized Belief Propagation Algorithms.” IEEE Transactions on Information Theory 51 (7): 2282–312.
