Gone are the days of writer’s block. Today’s would-be songwriters have a game-changing tool at their disposal: Artificial Intelligence (AI).
Many now view AI as a partner in music-making, one that enhances creativity and levels the playing field, enabling artists to experiment in ways that once would have required multiple technical and musical collaborators, as well as significant budgets.
As a lifelong drummer and student of music, I always took pride in the proverbial “10,000 hours” required to master a craft, a threshold that set a certain bar to entry for would-be musicians.
At the same time, I probably wasted an equal number of hours struggling with buggy recording software, expensive studios, and a mixed bag of collaborators (some amazing, some not so much).
The prospect of a collaborative AI music partner has its appeals, but is using AI to make music simply cheating?
Before the invention of photography, realistic portraits and images of the world could be produced only by highly skilled painters. Today we take photography for granted, and it is hard to imagine just how astonishing a well-executed realistic painting must have seemed in the 1800s. One could argue that photography was a major trigger of the modern art movement, helping give rise to creative geniuses like Vincent van Gogh and Pablo Picasso. Without photography, perhaps modern art would never have existed.
Could AI-generated music create a similar catalyst for a musical revolution? And if music creation is reduced to a mechanical process, then what is the artist’s role?
ALYSIA, founded by computer scientists who are also musicians, has a mission to “democratize songwriting through AI,” empowering anyone to write songs.
“People who have never previously written songs are able to create original music in a matter of minutes,” explains Dr Maya Ackerman, CEO and co-founder of WaveAI, the company behind ALYSIA. “The process begins with the user selecting an instrumental backing track and choosing (or entering) topics that the lyrics should discuss. The AI-based lyrics assistant then proposes lyrics one line at a time, which the user can simply select from to piece together the lyrics.”
“More advanced users can edit ALYSIA’s suggestions, or enter their own lyrical lines or melodies,” says Ackerman. “Professional musicians use the platform to break out of writer’s block, since the AI-based system never runs out of fresh ideas.”
Google Magenta is an open-source research project, started by the Google Brain team, that uses TensorFlow to explore machine learning as a tool in the creative process. Magenta recently developed NSynth (Neural Synthesizer), a machine learning algorithm that uses a deep neural network to learn the characteristics of sounds and then create completely new sounds based on those characteristics. NSynth is still in its prototype phase, and researchers are currently working with professional musicians to fine-tune the program.
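NSynth’s key idea, encoding sounds into learned feature vectors and then decoding new points between them, can be illustrated with a toy sketch. Everything below is invented for illustration: the function name, the three-number “embeddings”, and the example values are not NSynth’s actual API (real NSynth embeddings come from a trained WaveNet-style encoder).

```python
import numpy as np


def interpolate_embeddings(z_a, z_b, alpha):
    """Linearly blend two latent sound embeddings.

    alpha=0.0 returns the first sound's embedding, alpha=1.0 the
    second's; values in between land on a new point in the learned
    sound space, which a decoder could turn back into audio.
    """
    z_a = np.asarray(z_a, dtype=float)
    z_b = np.asarray(z_b, dtype=float)
    return (1.0 - alpha) * z_a + alpha * z_b


# Toy stand-ins for embeddings a trained encoder might produce
flute = np.array([0.9, 0.1, 0.4])
bass = np.array([0.1, 0.8, 0.2])

# A 50/50 blend: neither flute nor bass, but something in between
hybrid = interpolate_embeddings(flute, bass, 0.5)
print(hybrid)
```

The point of the sketch is that the blend happens in the learned representation, not in the raw audio: mixing two waveforms layers the sounds on top of each other, while mixing their embeddings yields a genuinely new timbre.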
“The major benefit of working with AI is having control over the creative process and being able to see something from inception to completion,” says artist Taryn Southern, who has written and released songs composed with the help of AI tools. “I don’t have a traditional music background, so having the ability to create music on my own and in my own time is incredibly empowering.”
“I start by making a series of decisions about the BPM, rhythm, key, mood and instrumentation I want, and then essentially give the AI ‘feedback’ or ‘notes’ each time it generates a new possibility until I’m happy with the overall song,” Southern explains. “I then download, arrange and mix the stems into a structure. From a creative standpoint, the process of working with AI is quite similar to working with another human: both rely on each other’s talents and inspiration to accomplish a given goal. AI gives me more creative autonomy in terms of what decisions I make and when a song is ready to be called complete.”
Southern argues that using AI to make music is absolutely not cheating. “The idea that a shortcut to any creative process undermines the whole process is the very antithesis of creativity. I imagine in twenty years, ‘coding’ songs will be commonplace.”
Should musicians feel threatened by the advent of collaborative AI musicians? World-renowned fusion drummer Jojo Mayer doesn’t think so.
He has built a career by effectively reverse-engineering electronic music, using it to push his physical limits as a drummer and to forge an innovative style of playing.
“Data is made up of ones and zeros,” Mayer says, “and our art and humanity lies in the space between one and zero.”
It was the famous jazz musician Miles Davis who said that it’s not how many notes you play, but the notes you choose not to play, that make a great performance. You can have all the data and AI-powered tools in the world at your fingertips, but choosing what not to do might just be the new “entry bar” for future artists.
Some people, often with little understanding of current AI technology or of music (or both), worry that AI will make musicians obsolete. I don’t subscribe to this point of view.
I believe these new tools could open enormous creative opportunities for music that won’t replace artists, but will rather empower them.
If this technology could one day help unlock the next Jimi Hendrix, I say bring it on.