Concatenative synthesis



Transferring timbre from one sound to another; synthesis by example. When people say “concatenative synthesis” or “audio mosaicing”, they usually mean a granular-synthesis-style method: chop a corpus of source audio into grains, then select and splice those grains to approximate a target sound. This being the epoch of neural networks, someone will probably get style transfer (Gatys, Ecker, and Bethge 2015) working for audio soon.
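
To make the idea concrete, here is a minimal sketch of descriptor-driven mosaicing, assuming `numpy` and `librosa` are available. The grain length, hop, MFCC descriptor, and plain nearest-neighbour matching are arbitrary choices for illustration, not a reproduction of any particular published system:

```python
import numpy as np
import librosa


def mosaic(target, corpus, sr=22050, grain=2048, hop=1024, n_mfcc=13):
    """Rebuild `target` from windowed grains of `corpus` by nearest-neighbour
    matching of per-grain MFCC descriptors, overlap-added with a Hann window."""
    win = np.hanning(grain)

    def grains_and_descriptors(y):
        starts = np.arange(0, len(y) - grain, hop)
        g = np.stack([y[s:s + grain] * win for s in starts])
        # One descriptor per grain: the mean MFCC vector over that grain.
        d = np.stack([
            librosa.feature.mfcc(y=gi, sr=sr, n_mfcc=n_mfcc).mean(axis=1)
            for gi in g
        ])
        return g, d

    corpus_grains, corpus_desc = grains_and_descriptors(corpus)
    _, target_desc = grains_and_descriptors(target)

    out = np.zeros(len(target))
    for i, dt in enumerate(target_desc):
        # Nearest corpus grain in descriptor space (Euclidean distance).
        j = np.argmin(np.sum((corpus_desc - dt) ** 2, axis=1))
        s = i * hop
        out[s:s + grain] += corpus_grains[j]
    return out / (np.abs(out).max() + 1e-9)
```

Feed it, say, a spoken-word target and an orchestral corpus (both loaded with `librosa.load` at the same sample rate) and you get the classic audio-mosaic effect; real systems add better descriptors, unit-selection smoothing across grains, and pitch/beat alignment.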

I’ve published in this area. See Mosaic Style Transfer using Sparse Autocorrelograms.

The most comprehensive overview of the classic concatenative methods, IMO, is Graham Coleman’s doctoral dissertation (Coleman 2015), which frames them in terms of loss functions and descriptors.
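
To give a flavour of that framing: generic descriptor-driven mosaicing (in the spirit of Schwarz 2007 as much as anyone) selects a sequence of corpus units $u_1,\dots,u_T$ to minimise a target cost plus a concatenation cost,

$$
\min_{u_1,\dots,u_T}\;\sum_{t=1}^{T}\bigl\|d(u_t)-d^{\mathrm{tgt}}_t\bigr\|^2
\;+\;\lambda\sum_{t=2}^{T}c(u_{t-1},u_t),
$$

where $d(\cdot)$ extracts descriptors, $d^{\mathrm{tgt}}_t$ are the target’s descriptors, $c$ penalises discontinuity between adjacent units, and $\lambda$ trades fidelity against smoothness. That is my schematic shorthand for the standard unit-selection setup, not Coleman’s actual notation.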

There are a few classic implementations about, e.g. Diemo Schwarz’s CataRT (Schwarz 2007).

Audio analogies

Related: analysis-resynthesis, learning gamelan.

References

Amatriain, Xavier, Jordi Bonada, Àlex Loscos, Josep Lluís Arcos, and Vincent Verfaille. 2003. “Content-Based Transformations.” Journal of New Music Research 32 (1): 95–114.
Aucouturier, Jean-Julien, and François Pachet. 2006. “Jamming with Plunderphonics: Interactive Concatenative Synthesis of Music.” Journal of New Music Research 35 (1): 35–50.
Blumensath, Thomas, and Mike Davies. 2004. “On Shift-Invariant Sparse Coding.” In Independent Component Analysis and Blind Signal Separation, edited by Carlos G. Puntonet and Alberto Prieto, 3195:1205–12. Berlin, Heidelberg: Springer Berlin Heidelberg.
———. 2006. “Sparse and Shift-Invariant Representations of Music.” IEEE Transactions on Audio, Speech and Language Processing 14 (1): 50–57.
Coleman, Graham Keith. 2015. “Descriptor Control of Sound Transformations and Mosaicing Synthesis.” PhD dissertation.
Collins, Nick. 2012. “Even More Errant Sound Synthesis.” In.
Collins, Nick, and Bob L. Sturm. 2011. “Sound Cross-Synthesis and Morphing Using Dictionary-Based Methods.” In International Computer Music Conference.
Cont, Arshia, Shlomo Dubnov, and Gerard Assayag. 2007. “GUIDAGE: A Fast Audio Query Guided Assemblage.” In. ICMA.
Driedger, Jonathan, Meinard Müller, and Sebastian Ewert. 2014. “Improving Time-Scale Modification of Music Signals Using Harmonic-Percussive Separation.” IEEE Signal Processing Letters 21 (1): 105–9.
Ellis, D.P.W., C.V. Cotton, and M.I. Mandel. 2008. “Cross-Correlation of Beat-Synchronous Representations for Music Similarity.” In IEEE International Conference on Acoustics, Speech and Signal Processing, 2008. ICASSP 2008, 57–60.
Forrester, Alexander I. J., and Andy J. Keane. 2009. “Recent Advances in Surrogate-Based Optimization.” Progress in Aerospace Sciences 45 (1–3): 50–79.
Gatys, Leon A., Alexander S. Ecker, and Matthias Bethge. 2015. “A Neural Algorithm of Artistic Style.” arXiv:1508.06576 [cs, q-bio], August.
Green, D., and S. Bass. 1984. “Representing Periodic Waveforms with Nonorthogonal Basis Functions.” IEEE Transactions on Circuits and Systems 31 (6): 518–34.
Kersten, Stefan, and Hendrik Purwins. 2010. “Sound Texture Synthesis with Hidden Markov Tree Models in the Wavelet Domain.” In.
Kowalski, M., K. Siedenburg, and M. Dörfler. 2013. “Social Sparsity! Neighborhood Systems Enrich Structured Shrinkage Operators.” IEEE Transactions on Signal Processing 61 (10): 2498–2511.
Kronland-Martinet, R., Ph. Guillemain, and S. Ystad. 1997. “Modelling of Natural Sounds by Time–Frequency and Wavelet Representations.” Organised Sound 2 (03): 179–91.
Masri, Paul, Andrew Bateman, and Nishan Canagarajah. 1997a. “A Review of Time–Frequency Representations, with Application to Sound/Music Analysis–Resynthesis.” Organised Sound 2 (03): 193–205.
———. 1997b. “The Importance of the Time–Frequency Representation for Sound/Music Analysis–Resynthesis.” Organised Sound 2 (03): 207–14.
Mital, Parag K., Mick Grierson, and Tim J. Smith. 2013. “Corpus-Based Visual Synthesis: An Approach for Artistic Stylization.” In, 51. ACM Press.
Neidinger, R. 2010. “Introduction to Automatic Differentiation and MATLAB Object-Oriented Programming.” SIAM Review 52 (3): 545–63.
Queipo, Nestor V., Raphael T. Haftka, Wei Shyy, Tushar Goel, Rajkumar Vaidyanathan, and P. Kevin Tucker. 2005. “Surrogate-Based Analysis and Optimization.” Progress in Aerospace Sciences 41 (1): 1–28.
Rebollo-Neira, L., and D. Lowe. 2002. “Optimized Orthogonal Matching Pursuit Approach.” IEEE Signal Processing Letters 9 (4): 137–40.
Roma, Gerard, Owen Green, and Pierre Alexandre Tremblay. 2020. “Audio Morphing Using Matrix Decomposition and Optimal Transport.” In DAFx 2020, 8.
Schwarz, Diemo. 2007. “Corpus-Based Concatenative Synthesis.” IEEE Signal Processing Magazine 24 (2): 92–104.
———. 2011. “State of the Art in Sound Texture Synthesis.” In Proceedings of DAFx-11, 221–31.
Simon, Ian, Sumit Basu, David Salesin, and Maneesh Agrawala. 2005. “Audio Analogies: Creating New Music from an Existing Performance by Concatenative Synthesis.” In Proceedings of the 2005 International Computer Music Conference, 65–72.
Sturm, B. L., J. J. Shynk, L. Daudet, and C. Roads. 2008. “Dark Energy in Sparse Atomic Estimations.” IEEE Transactions on Audio, Speech, and Language Processing 16 (3): 671–76.
Sturm, Bob L. 2006. “Adaptive Concatenative Sound Synthesis and Its Application to Micromontage Composition.” Computer Music Journal 30 (4): 46–66.
———. 2011. “Sparse Vector Distributions and Recovery from Compressed Sensing,” March.
Sturm, Bob L., Curtis Roads, Aaron McLeran, and John J. Shynk. 2009. “Analysis, Visualization, and Transformation of Audio Signals Using Dictionary-Based Methods.” Journal of New Music Research 38 (4): 325–41.
Tachibana, Hideyuki, Nobutaka Ono, and Shigeki Sagayama. 2014. “Singing Voice Enhancement in Monaural Music Signals Based on Two-Stage Harmonic/Percussive Sound Separation on Multiple Resolution Spectrograms.” IEEE/ACM Transactions on Audio, Speech, and Language Processing 22 (1): 228–37.
