Concatenative synthesis

Transferring timbre from one sound to another; synthesis by example. When people say “concatenative synthesis” or “audio mosaicing”, they usually mean a granular synthesis method: reassembling a target sound from short snippets selected out of a corpus. This being the epoch of neural networks, someone will probably get style transfer working for audio soon.

I’m publishing in this area. See, e.g. Mosaic Style Transfer using Sparse Autocorrelograms.

The most comprehensive overview of classic concatenative stuff IMO is contained in Graham Coleman’s doctoral dissertation, Coleman (2015), which frames it in terms of loss functions and descriptors.
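To make the descriptors-and-loss framing concrete, here is a minimal sketch of corpus-based concatenative synthesis: slice a corpus into grains, summarise each grain by a couple of descriptors (here RMS energy and spectral centroid, chosen for illustration), then rebuild a target by nearest-neighbour matching in descriptor space and overlap-add. The descriptor choice, grain size, and L2 loss are all assumptions for the sketch, not anything prescribed by Coleman (2015).

```python
import numpy as np

def descriptors(frame, sr):
    """Per-grain descriptors: RMS energy and spectral centroid."""
    win = np.hanning(len(frame))
    mag = np.abs(np.fft.rfft(frame * win))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    rms = np.sqrt(np.mean(frame ** 2))
    centroid = (freqs * mag).sum() / (mag.sum() + 1e-12)
    return np.array([rms, centroid])

def mosaic(target, corpus, sr=16000, grain=1024, hop=512):
    """Rebuild `target` from the best-matching grains of `corpus`,
    matched by L2 distance in (normalised) descriptor space and
    overlap-added with a Hann window."""
    slice_up = lambda x: [x[i:i + grain] for i in range(0, len(x) - grain, hop)]
    corpus_grains = slice_up(corpus)
    corpus_desc = np.stack([descriptors(g, sr) for g in corpus_grains])
    # Normalise descriptors so the two dimensions are commensurable.
    mu, sd = corpus_desc.mean(0), corpus_desc.std(0) + 1e-12
    corpus_desc = (corpus_desc - mu) / sd
    out = np.zeros(len(target))
    win = np.hanning(grain)
    for i, tg in enumerate(slice_up(target)):
        d = (descriptors(tg, sr) - mu) / sd
        best = np.argmin(((corpus_desc - d) ** 2).sum(axis=1))
        out[i * hop:i * hop + grain] += corpus_grains[best] * win
    return out

# Toy example: approximate a rising chirp using grains drawn from a
# corpus of three fixed-pitch sine tones.
sr = 16000
t = np.arange(sr) / sr
corpus = np.concatenate([np.sin(2 * np.pi * f * t) for f in (220, 440, 880)])
target = np.sin(2 * np.pi * (220 + 400 * t) * t)
out = mosaic(target, corpus, sr)
```

Real systems add a concatenation cost between successive grains (so the selection becomes a Viterbi-style path search rather than independent nearest neighbours), richer descriptors, and pitch/time transformation of the selected units.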

There are a few classic implementations about:

Audio analogies

Related: analysis-resynthesis, learning gamelan.

Amatriain, Xavier, Jordi Bonada, Àlex Loscos, Josep Lluís Arcos, and Vincent Verfaille. 2003. “Content-Based Transformations.” Journal of New Music Research 32 (1): 95–114. https://doi.org/10.1076/jnmr.32.1.95.16800.

Aucouturier, Jean-Julien, and François Pachet. 2006. “Jamming with Plunderphonics: Interactive Concatenative Synthesis of Music.” Journal of New Music Research 35 (1): 35–50. https://doi.org/10.1080/09298210600696790.

Blumensath, Thomas, and Mike Davies. 2004. “On Shift-Invariant Sparse Coding.” In Independent Component Analysis and Blind Signal Separation, edited by Carlos G. Puntonet and Alberto Prieto, 3195:1205–12. Berlin, Heidelberg: Springer Berlin Heidelberg. http://link.springer.com/chapter/10.1007/978-3-540-30110-3_152.

———. 2006. “Sparse and Shift-Invariant Representations of Music.” IEEE Transactions on Audio, Speech and Language Processing 14 (1): 50–57. https://doi.org/10.1109/TSA.2005.860346.

Coleman, Graham Keith. 2015. “Descriptor Control of Sound Transformations and Mosaicing Synthesis.” http://repositori.upf.edu/handle/10230/27367.

Collins, Nick. 2012. “Even More Errant Sound Synthesis.” http://community.dur.ac.uk/nick.collins/research/evenmoreerrant.pdf.

Collins, Nick, and Bob L. Sturm. 2011. “Sound Cross-Synthesis and Morphing Using Dictionary-Based Methods.” In International Computer Music Conference. http://vbn.aau.dk/files/77310007/dbmcrossynth.pdf.

Cont, Arshia, Shlomo Dubnov, and Gerard Assayag. 2007. “GUIDAGE: A Fast Audio Query Guided Assemblage.” ICMA. https://hal.inria.fr/hal-00839071/document.

Driedger, Jonathan, Meinard Müller, and Sebastian Ewert. 2014. “Improving Time-Scale Modification of Music Signals Using Harmonic-Percussive Source Separation.” IEEE Signal Processing Letters 21 (1): 105–9. https://doi.org/10.1109/LSP.2013.2294023.

Ellis, D. P. W., C. V. Cotton, and M. I. Mandel. 2008. “Cross-Correlation of Beat-Synchronous Representations for Music Similarity.” In IEEE International Conference on Acoustics, Speech and Signal Processing, 2008. ICASSP 2008, 57–60. https://doi.org/10.1109/ICASSP.2008.4517545.

Forrester, Alexander I. J., and Andy J. Keane. 2009. “Recent Advances in Surrogate-Based Optimization.” Progress in Aerospace Sciences 45 (1–3): 50–79. https://doi.org/10.1016/j.paerosci.2008.11.001.

Gatys, Leon A., Alexander S. Ecker, and Matthias Bethge. 2015. “A Neural Algorithm of Artistic Style,” August. http://arxiv.org/abs/1508.06576.

Green, D., and S. Bass. 1984. “Representing Periodic Waveforms with Nonorthogonal Basis Functions.” IEEE Transactions on Circuits and Systems 31 (6): 518–34. https://doi.org/10.1109/TCS.1984.1085543.

Kersten, Stefan, and Hendrik Purwins. 2010. “Sound Texture Synthesis with Hidden Markov Tree Models in the Wavelet Domain.” http://www.mtg.upf.edu/system/files/publications/kersten_sound_texture_synthesis_smc2010.pdf.

Kowalski, M., K. Siedenburg, and M. Dorfler. 2013. “Social Sparsity! Neighborhood Systems Enrich Structured Shrinkage Operators.” IEEE Transactions on Signal Processing 61 (10): 2498–2511. https://doi.org/10.1109/TSP.2013.2250967.

Kronland-Martinet, R., Ph. Guillemain, and S. Ystad. 1997. “Modelling of Natural Sounds by Time–Frequency and Wavelet Representations.” Organised Sound 2 (3): 179–91.

Masri, Paul, Andrew Bateman, and Nishan Canagarajah. 1997a. “A Review of Time–Frequency Representations, with Application to Sound/Music Analysis–Resynthesis.” Organised Sound 2 (3): 193–205. https://doi.org/10.1017/S1355771898009042.

———. 1997b. “The Importance of the Time–Frequency Representation for Sound/Music Analysis–Resynthesis.” Organised Sound 2 (3): 207–14. https://doi.org/10.1017/S1355771898009054.

Mital, Parag K., Mick Grierson, and Tim J. Smith. 2013. “Corpus-Based Visual Synthesis: An Approach for Artistic Stylization.” 51. ACM Press. https://doi.org/10.1145/2492494.2492505.

Neidinger, R. 2010. “Introduction to Automatic Differentiation and MATLAB Object-Oriented Programming.” SIAM Review 52 (3): 545–63. https://doi.org/10.1137/080743627.

Queipo, Nestor V., Raphael T. Haftka, Wei Shyy, Tushar Goel, Rajkumar Vaidyanathan, and P. Kevin Tucker. 2005. “Surrogate-Based Analysis and Optimization.” Progress in Aerospace Sciences 41 (1): 1–28. https://doi.org/10.1016/j.paerosci.2005.02.001.

Rebollo-Neira, L., and D. Lowe. 2002. “Optimized Orthogonal Matching Pursuit Approach.” IEEE Signal Processing Letters 9 (4): 137–40. https://doi.org/10.1109/LSP.2002.1001652.

Schwarz, Diemo. 2007. “Corpus-Based Concatenative Synthesis.” IEEE Signal Processing Magazine 24 (2): 92–104. https://doi.org/10.1109/MSP.2007.323274.

———. 2011. “State of the Art in Sound Texture Synthesis.” In Proceedings of DAFx-11, 221–31. http://recherche.ircam.fr/pub/dafx11/Papers/30_e.pdf.

Simon, Ian, Sumit Basu, David Salesin, and Maneesh Agrawala. 2005. “Audio Analogies: Creating New Music from an Existing Performance by Concatenative Synthesis.” In Proceedings of the 2005 International Computer Music Conference, 65–72. http://research.microsoft.com/en-us/um/redmond/groups/cue/compmusic/audioanalogies_icmc2005.pdf.

Sturm, B. L., J. J. Shynk, L. Daudet, and C. Roads. 2008. “Dark Energy in Sparse Atomic Estimations.” IEEE Transactions on Audio, Speech, and Language Processing 16 (3): 671–76. https://doi.org/10.1109/TASL.2007.914975.

Sturm, Bob L. 2006. “Adaptive Concatenative Sound Synthesis and Its Application to Micromontage Composition.” Computer Music Journal 30 (4): 46–66. https://doi.org/10.1162/comj.2006.30.4.46.

———. 2011. “Sparse Vector Distributions and Recovery from Compressed Sensing,” March. https://arxiv.org/abs/1103.6246.

Sturm, Bob L., Curtis Roads, Aaron McLeran, and John J. Shynk. 2009. “Analysis, Visualization, and Transformation of Audio Signals Using Dictionary-Based Methods.” Journal of New Music Research 38 (4): 325–41. https://doi.org/10.1080/09298210903171178.

Tachibana, Hideyuki, Nobutaka Ono, and Shigeki Sagayama. 2014. “Singing Voice Enhancement in Monaural Music Signals Based on Two-Stage Harmonic/Percussive Sound Separation on Multiple Resolution Spectrograms.” IEEE/ACM Transactions on Audio, Speech, and Language Processing 22 (1): 228–37. https://doi.org/10.1109/TASLP.2013.2287052.