New Generative AI Tools Open the Doors of Music Creation

Our latest AI music technologies are now available in MusicFX DJ, Music AI Sandbox and YouTube Shorts

For nearly a decade, our teams have been exploring how artificial intelligence (AI) can support the creative process, building tools that empower enthusiasts and professionals to discover new forms of creative expression.

Over the past year, we’ve been working in close collaboration with partners across the music industry through our Music AI Incubator and other programs. Their input has guided our state-of-the-art generative music experiments and helped us ensure that our new generative AI tools responsibly open the doors of music creation to everyone.

Today, in partnership with Google Labs, we’re releasing a reimagined experience for MusicFX DJ that makes it easier for anyone to generate music, interactively, in real time.

We’re also announcing updates to our music AI toolkit, called Music AI Sandbox, and highlighting our latest AI music technologies in YouTube’s Dream Track, a suite of experiments that creators can use to generate high-quality instrumentals for their Shorts and videos.

Generating live music with MusicFX DJ

At I/O this year, we shared an early preview of MusicFX DJ, a digital tool that anyone can play like an instrument, making the joy of live music creation more accessible to people of all skill levels.

Today, we’re introducing a number of updates to MusicFX DJ, including an expanded set of intuitive controls, a reimagined interface, improved audio quality and new model behaviors. These capabilities let players generate and steer a continuous flow of music, share their creations with friends and play a jam session together.

Working in close collaboration with Jacob Collier — a six-time GRAMMY award-winning singer, songwriter, producer and multi-instrumentalist — we designed these updates to make MusicFX DJ more accessible, useful and inspiring.

A screenshot of the user interface designs for our updated Music AI Sandbox, which has a multi-track view to help organize and refine compositions with precise controls.


YouTube’s Dream Track experiment now generates instrumental soundtracks

Building on our ongoing work with YouTube, we’ve evolved our Dream Track experiment to allow U.S. creators to explore a range of genres and prompts that generate instrumental soundtracks with powerful text-to-music models.

Our latest music generation models are trained with a novel reinforcement learning approach to produce higher audio quality while paying closer attention to the nuances of a user’s text prompts. Responsibly deploying generative technologies is core to our values, so all music generated by MusicFX DJ and Dream Track is watermarked using SynthID.

Building the future of music creation together

We’ve been delighted to work with partners in the music community over the past year to help build technology that both responds to the needs of professionals and expands access for the next generation of musicians.

We’re looking forward to deepening these partnerships as we build the future of music creation together, developing even better tools to inspire creativity.

This work was made possible by core research and engineering efforts from Andrea Agostinelli, Zalán Borsos, George Brower, Antoine Caillon, Cătălina Cangea, Noah Constant, Michael Chang, Chris Deaner, Timo Denk, Chris Donahue, Michael Dooley, Jesse Engel, Christian Frank, Beat Gfeller, Tobenna Peter Igwe, Drew Jaegle, Matej Kastelic, Kazuya Kawakami, Pen Li, Ethan Manilow, Yotam Mann, Colin McArdell, Brian McWilliams, Adam Roberts, Matt Sharifi, Ian Simon, Ondrej Skopek, Marco Tagliasacchi, Cassie Tarakajian, Alex Tudor, Victor Ungureanu, Mauro Verzetti, Damien Vincent, Luyu Wang, Björn Winkler, Yan Wu, and Mauricio Zuluaga.

MusicFX DJ was developed by Antoine Caillon, Noah Constant, Jesse Engel, Alberto Lalama, Hema Manickavasagam, Adam Roberts, Ian Simon, and Cassie Tarakajian in collaboration with our partners from Google Labs including Obed Appiah-Agyeman, Tahj Atkinson, Carlie de Boer, Phillip Campion, Sai Kiran Gorthi, Kelly Lau-Kee, Elias Roman, Noah Semus, Trond Wuellner, Kristin Yim, and Jamie Zyskowski. We give our deepest thanks to Jacob Collier, Ben Bloomberg, and Fran Haincourt for their valuable feedback throughout the development process.

Music AI Sandbox was developed by Andrea Agostinelli, George Brower, Ross Cairns (xWF), Michael Chang, Yeawon Choi, Chris Deaner, Jesse Engel, Reed Enger, Beat Gfeller, Tom Hume, Tom Jenkins, Max Edelmann (xWF), Drew Jaegle, Jacob Kelly, DY Kim, David Madras, Hema Manickavasagam, Ethan Manilow, Yotam Mann, Colin McArdell, Chris Reardon, Felix Riedel, Adam Roberts, Arathi Sethumadhavan, Eleni Shaw, Sage Stevens, Amy Stuart, Luyu Wang, Pawel Wluka, and Yan Wu in collaboration with our partners in YouTube and Tech & Society.

Dream Track was developed by Andrea Agostinelli, Zalán Borsos, Geoffrey Cideron, Timo Denk, Michael Dooley, Christian Frank, Sertan Girgin, Myriam Hamed Torres, Matej Kastelic, Pen Li, Brian McWilliams, Matt Sharifi, Ondrej Skopek, Marco Tagliasacchi, Mauro Verzetti, and Mauricio Zuluaga, in collaboration with our partners in YouTube.

Special thanks to Aäron van den Oord, Tom Hume, Douglas Eck, Eli Collins, Mira Lane, Koray Kavukcuoglu, and Demis Hassabis for their insightful guidance and support throughout the research process. Thanks to Mahyar Bordbar and DY Kim for helping coordinate these efforts, as well as the YouTube Artist Partnerships team for their support partnering with the music industry.

We also acknowledge the many other individuals who contributed across Google DeepMind and Alphabet, including our partners at YouTube.
