Placeholder.
A special class of generative AI for music; for other approaches, see nn music.
Here we consider specifically music generation using diffusion models, in the same spirit as diffusion-based image synthesis.
(Chen et al. 2020; Goel et al. 2022; Hernandez-Olivan, Hernandez-Olivan, and Beltran 2022; Kreuk, Taigman, et al. 2022; Kreuk, Synnaeve, et al. 2022; Lee and Han 2021; Pascual et al. 2022; von Platen et al. 2022)
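The basic recipe carries over from images almost unchanged: corrupt waveforms (or spectrograms) with Gaussian noise according to a schedule, and train a network to predict the noise so the corruption can be reversed at sampling time. Here is a minimal sketch of the standard DDPM ε-prediction training step applied to raw waveforms; `TinyDenoiser`, the schedule constants, and the tensor shapes are illustrative assumptions, not any particular paper's architecture.

```python
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Stand-in 1-D denoiser. A real waveform diffusion model would be a
    U-Net (or transformer) conditioned on the timestep t; this toy net
    ignores t to keep the sketch short."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=5, padding=2),
            nn.SiLU(),
            nn.Conv1d(channels, 1, kernel_size=5, padding=2),
        )

    def forward(self, x, t):
        return self.net(x)

T = 1000
betas = torch.linspace(1e-4, 0.02, T)           # linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative product ᾱ_t

def ddpm_training_step(model, x0):
    """One DDPM training step on a batch of waveforms x0: (B, 1, samples)."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,))
    noise = torch.randn_like(x0)
    a = alphas_bar[t].view(b, 1, 1)
    # Forward process: x_t = sqrt(ᾱ_t) x_0 + sqrt(1 - ᾱ_t) ε
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise
    # Standard ε-prediction objective
    return ((model(x_t, t) - noise) ** 2).mean()

model = TinyDenoiser()
x0 = torch.randn(4, 1, 16000)  # stand-in for 1 s of audio at 16 kHz
loss = ddpm_training_step(model, x0)
loss.backward()
```

Everything downstream (samplers, conditioning, latent vs. waveform domain) varies between the papers cited above, but this loss is the common core.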
Incoming
MusicGen: Simple and Controllable Music Generation
We tackle the task of conditional music generation. We introduce MusicGen, a single language model (LM) that operates over several streams of compressed discrete music representation, i.e., tokens. Unlike prior work, MusicGen comprises a single-stage transformer LM together with efficient token interleaving patterns, which eliminates the need for cascading several models, e.g., hierarchically or via upsampling. Following this approach, we demonstrate how MusicGen can generate high-quality samples while being conditioned on textual descriptions or melodic features, allowing better control over the generated output. We conduct extensive empirical evaluation, considering both automatic and human studies, showing that the proposed approach is superior to the evaluated baselines on a standard text-to-music benchmark. Through ablation studies, we shed light on the importance of each of the components comprising MusicGen. Music samples can be found in the supplementary material. Code and models are available at github.com/facebookresearch/audiocraft.
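The interleaving trick the abstract alludes to is easy to illustrate. MusicGen's codec emits K parallel codebook streams per audio frame; the paper's "delay" pattern offsets stream k by k positions, so a single-stage LM can predict all K codebooks, one frame per position, without a model cascade. Below is a minimal sketch of such a delay pattern; the function name, `pad_id` sentinel, and tensor layout are my assumptions, not the audiocraft implementation.

```python
import torch

def delay_interleave(codes: torch.Tensor, pad_id: int = -1) -> torch.Tensor:
    """Apply a 'delay' interleaving pattern: codebook k is shifted k steps
    to the right, so at LM position t the model jointly predicts codebook k's
    token for frame t - k.

    codes: (K, T) integer token matrix from a residual codec.
    Returns: (K, T + K - 1) matrix padded with pad_id.
    """
    K, T = codes.shape
    out = torch.full((K, T + K - 1), pad_id, dtype=codes.dtype)
    for k in range(K):
        out[k, k:k + T] = codes[k]
    return out

codes = torch.arange(12).reshape(4, 3)  # K=4 codebooks, T=3 frames
print(delay_interleave(codes))
# tensor([[ 0,  1,  2, -1, -1, -1],
#         [-1,  3,  4,  5, -1, -1],
#         [-1, -1,  6,  7,  8, -1],
#         [-1, -1, -1,  9, 10, 11]])
```

The payoff is that the sequence length grows only by K − 1 positions rather than by a factor of K, which is what lets one transformer handle all streams at once.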
archinetai/audio-diffusion-pytorch: Audio generation using diffusion models, in PyTorch.
acids-ircam/diffusion_models: diffusion_03_waveform.ipynb, a notebook on diffusion models for raw waveforms.
Apple acquires song-shifting startup AI Music, here’s what it could mean for users