In late May, a small crowd at St. Dunstan's church in East London's Stepney district gathered for two hours of traditional Irish music. But this event was different; the tunes it featured were composed, in part, by an artificial intelligence (AI) algorithm, dubbed folk-rnn, a stark reminder of how cutting-edge AI is gradually permeating every aspect of human life and culture—even creativity.
Developed by researchers at Kingston University and Queen Mary University of London, folk-rnn is one of numerous projects exploring the intersection of artificial intelligence and creative arts. Folk-rnn's performance was met with a mixture of fascination, awe, and consternation at seeing soulless machines conquer something widely considered the exclusive domain of human intelligence. But these explorations are discovering new ways that humans and machines can cooperate.
How Does AI Create Art?
Like many other AI products, folk-rnn uses machine learning algorithms, a subset of artificial intelligence. Instead of relying on predefined rules, machine learning ingests large data sets and creates mathematical representations of the patterns and correlations it finds, which it then uses to accomplish tasks.
Folk-rnn was trained on a crowd-sourced repertoire of 23,000 Irish music transcriptions before starting to crank out its own tunes. Since its inception in 2015, folk-rnn has undergone three iterations and has produced more than 100,000 songs, many of which have been compiled in a 14-volume online compendium.
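Folk-rnn itself is a recurrent neural network trained on tunes written in ABC notation, but the core idea described above—learn which musical symbols tend to follow which, then sample new sequences from those learned patterns—can be sketched with a much simpler next-token frequency model. The three "training tunes" below are hypothetical fragments for illustration, not real folk-rnn data:

```python
import random

# A hypothetical miniature corpus: each tune is a sequence of note tokens.
corpus = [
    "G A B d B A G E",
    "E G A B A G E D",
    "G A B d e d B A",
]

# Learn the patterns: count which token follows which across the corpus.
follows = {}
for tune in corpus:
    tokens = tune.split()
    for a, b in zip(tokens, tokens[1:]):
        follows.setdefault(a, []).append(b)

def generate(start, length, seed=0):
    """Sample a new tune from the learned transition table."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = follows.get(out[-1])
        if not options:  # dead end: no observed successor
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(generate("G", 8))
```

A real recurrent network conditions each choice on the whole history rather than just the previous token, which is what lets folk-rnn capture longer-range structure like phrase repetition.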
Flow Machines, a five-year project funded by the European Research Council and coordinated by Sony's Computer Science Labs, also applied AI algorithms to music. Its most notable—and bizarre—achievement is "Daddy's Car," a song generated by an algorithm that was trained with lead sheets from 40 of The Beatles' hit songs.
Algorithms can mimic the style and feel of a musical genre, but they often make basic mistakes a human composer would not. In fact, most of the pieces played at folk-rnn's debut were tweaked by human musicians.
"Art is not a well-defined problem, because you never know exactly what you want," says Francois Pachet, who served as the lead researcher at Flow Machines and is now director of Spotify's Creator Technology Research Lab. But, he adds cheerfully, "it's good actually that art is not well defined. Otherwise, it would not be art."
The generated lead sheet for "Daddy's Car" was also edited by a human musician, and some tracks were added by hand. "There was pretty much a lot of AI in there, but not everything," Pachet says, "including voice lyrics and structure, and of course the whole mix and production."
"The real benefit is coming up with sequences that aren't expected, and that lead to musically interesting ideas," says Bob Sturm, a lecturer in digital media at Queen Mary University of London who worked on folk-rnn. "We want the system to create mistakes, but the right kind of mistakes."
Daren Banarsë, an Irish musician who examined and played some of the tunes generated by folk-rnn, attested to the benefits of interesting mistakes. "There was one reel which intrigued me," he says. "The melody kept oscillating between major and minor, in a somewhat random fashion. Stylistically, it was incorrect, but it was quirky, something I wouldn't have thought of myself."
Spotify's Pachet explains that these unexpected twists can actually help improve the quality of pop music. "Take the 30 or 50 most popular songs on YouTube. If you look at the melody, the harmony, the rhythm and the structure, they are extremely conventional, which is quite depressing. You have only three or four chords, and they're always the same. Creative AI is very interesting, not only because it's fun, but also because it brings hope. I hope that we could change or impact the quality of the most popular songs today."
No Right Answers
"The thing that makes art wonderful for humanity is that there is no right answer—it's entirely subjective," says Drew Silverstein, CEO and co-founder of Amper Music, an AI startup based in New York. "You and I might listen to the exact same piece of music, and you might like it, and I might hate it, and neither of us is right or wrong. It's just different.
"The challenge in the modern world is to build an AI that is capable of reflecting that subjectivity," he adds. "Interestingly, sometimes, neural networks and purely data-driven approaches are not the right answer."
Oded Ben-Tal, senior lecturer in music technology at Kingston University and a researcher for folk-rnn, points out another challenge AI faces with respect to creating music: data does not represent everything.
"In some ways, you can say music is information. We listen to a lot of music, and as a composer, I get inspired by what I hear to make new music," Ben-Tal says. "But the translation into data is a big stumbling block and a big problem in that analogy. Because no data actually captures all the music."
To put it simply, an AI algorithm's interpretation and understanding of music and arts is very different from that of humans.
"In the case of our system, it's far too easy to fall into the trap of saying it's learning the style or it's learning aspects of Irish music, when in fact it's not doing that," says Sturm. "It's learning very abstract representations of this kind of music. And these abstract representations have very little to do with how you experience the music, how a composer puts them together in the context of this music within the tradition.
"Humans are necessary in the pursuit because, at the end of the day, we have to make decisions on whether to incorporate certain things produced by the computer that we curate from this output and create new music," Sturm says.
In visual arts, the divide between the perception of humans and machines is even more pronounced. Take DeepDream, an inside-out version of Google's image-classification algorithm. Given a photo, it looks for familiar patterns and modifies the image to look more like the things it has identified. This can be useful for turning rough sketches into more polished images, but it yields unexpected results when left to its own devices. If you provide DeepDream with an image of your face and it finds a pattern that looks like a dog, it will turn part of your face into a dog.
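The mechanism behind this is gradient ascent: rather than adjusting the network to fit the image, DeepDream adjusts the image to amplify whatever a network layer already responds to. The real system uses a trained convolutional network; in the self-contained sketch below, the "network" is a single hand-made 3x3 diagonal filter (an assumption for illustration), so the math stays simple:

```python
import numpy as np

# A random 8x8 grayscale "photo".
rng = np.random.default_rng(0)
image = rng.random((8, 8))

# A fixed "feature detector" standing in for a network layer: it
# responds to diagonal structure.
pattern = np.eye(3)

def activation(img):
    """Sum of filter responses over all 3x3 patches."""
    total = 0.0
    for i in range(img.shape[0] - 2):
        for j in range(img.shape[1] - 2):
            total += np.sum(img[i:i+3, j:j+3] * pattern)
    return total

# Because this "layer" is linear, its gradient w.r.t. each pixel is just
# the accumulated filter weight wherever the filter overlaps that pixel.
grad = np.zeros_like(image)
for i in range(image.shape[0] - 2):
    for j in range(image.shape[1] - 2):
        grad[i:i+3, j:j+3] += pattern

before = activation(image)
image += 0.1 * grad   # one gradient-ascent step: "dream" the pattern into the image
after = activation(image)
```

Repeating that step is what makes DeepDream hallucinate: any faint trace of a learned pattern, whether a diagonal edge or a dog's face, gets amplified until it dominates the image.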
"It's almost like the neural net is hallucinating," an artist who interned at Google's DeepMind AI lab said about the software in an interview with Wired last year. "It sees dogs everywhere!"
But AI-generated art often looks stunning and can rake in thousands of dollars at auctions. At a San Francisco art show held last year, paintings created with the help of Google's DeepDream sold for up to $8,000.
The Business of Creative AI
While researchers and scientists continue to explore creative AI, a handful of startups have already moved into the space and are offering products that solve specific business use cases. One is Silverstein's Amper Music, which he describes as a "composer, producer, performer that creates unique professional music tailored to any content in a matter of seconds."
To create music with Amper, you specify the desired mood, length, and genre. The AI produces a basic composition in a few seconds that you can tweak and adjust. Amper also offers an application programming interface (API), so developers can incorporate the platform's creative power into their software.
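Amper's actual API is not documented in this article, so the request shape below—the endpoint URL and every field name—is purely hypothetical, meant only to illustrate how mood, genre, and length might map onto an HTTP call from a developer's code (nothing is sent here):

```python
import json

def build_request(mood, genre, length_seconds):
    """Assemble a hypothetical compose request; placeholder endpoint, no network call."""
    return {
        "url": "https://api.example.com/v1/compose",  # placeholder, not Amper's real endpoint
        "method": "POST",
        "body": json.dumps({
            "mood": mood,
            "genre": genre,
            "length_seconds": length_seconds,
        }),
    }

req = build_request("uplifting", "cinematic", 30)
print(req["body"])
```

In practice, a developer would send such a request, poll for the render to finish, and receive an audio file in return; the point is that a few high-level parameters stand in for an entire composition process.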
Jukedeck, a London-based startup founded by two former Cambridge University students, provides a similar service. As with Amper, users give Jukedeck a few basic parameters, and it returns an original musical track.
The main customers of both companies are businesses that require "functional music," the type used in ads, video games, presentations, and YouTube videos. Jukedeck has created more than 500,000 tracks for customers including Coca-Cola, Google, and London's Natural History Museum. Composers are also learning to use the tools to enhance the music they create for their customers.
A third startup, Australia-based Popgun, is building an AI musician that can play music with humans. Named Alice, the AI listens to what you play and then responds instantly with a unique creation that fits with what you played.
In the visual arts industry, business use cases are gradually emerging. Last year, Adobe introduced Sensei, an AI platform aimed at improving human creativity. Sensei assists artists in a number of ways, such as automatically removing the background of photos or finding stock images based on the context of a poster or sketch.
Collaboration Between AI and Human Artists
Perhaps not surprisingly, these startups are founded and managed by people who have strong backgrounds as artists. Amper's Silverstein studied music composition and theory at Vanderbilt University and composed music for TV, films, and video games. Ed Newton-Rex, founder and CEO of Jukedeck, is also a practiced music composer.
But not everyone is convinced of the positive role of artificial intelligence in arts. Some of the attendees at folk-rnn's event described the AI-generated pieces as lacking in "spirit, emotion and passion." Others expressed concern for the "cultural impact and the loss of the human beauty and understanding of music."
"I haven't met one musician that I've told about this who hasn't reacted with something close to the negative side of things," said Úna Monaghan, a composer and researcher involved in folk-rnn who spoke to Inverse. "Their reaction has been from slightly negative, to outright 'why are you doing this?'"
The developers of creative AI algorithms do not generally share these concerns. "I don't think humans will become redundant in music-making," says Newton-Rex. "For a start, we as listeners care about much more than just the music we're listening to; we care about the artist, and about their story. That will always be the case."
"We think of functional music as music that is valued for its use case and not for the creativity or collaboration that went into making it," Silverstein says. But artistic music, Silverstein explains, "is much more about the process than the use case. Steven Spielberg and John Williams writing the score of Star Wars, that's about a human collaboration."
"The key use-cases we see lie in collaboration with musicians," says Jack Nolan, co-founder of Popgun. "Artists can use Alice as a source of creative inspiration or to help them come up with melodies and chord progressions in their music. We don't think people will ever stop wanting to create their own sounds. We think AI will help them do this, rather than replace them."
Daren Banarsë agrees on the benefits of collaboration. "I always find it daunting when I have to start a large-scale composition. Maybe I could give the computer a few parameters: the number of players, the mood, even the names of some of my favorite composers, and it could generate a basic structure for me," he says. "I wouldn't expect it to work out of the box, but it would be a starting point. Or it could output a selection of melodic ideas or chord progressions for me to look through. And somewhere in there, there's going to be a computer glitch or random quirk, which could take me in a completely unexpected direction."
Ben-Tal admits that some jobs might be affected. "Working musicians will have to adapt," he says. "I show this to my students and say, 'You need to up your game.' This will mean some of the entry-level jobs into the music industry will not be there in five or ten years, or you'll need to do things differently or have a different set of skills."
AI creativity can also help people without inherent talent or hard-earned skills express themselves artistically. Take Vincent, an AI drawing platform that helps transform rough sketches into professional-looking paintings, and the AI music platforms that create decent music with minimal input.
Jukedeck's Newton-Rex describes this as "democratizing" creativity. "People with less formal musical education can get to grips with the basics of music and use AI to help them make music," he says.
Pachet concurs. He draws an analogy between recent AI developments and the arrival of the first digital synthesizers in the 80s, followed by digital samplers. At the time, there was a similar fear that musicians would lose their jobs to computers. "But what happened was the exact opposite, in a sense that everyone took these new machines and hardware with them and learned how to use them productively," he says. "The music industry exploded in some sense."
"There will be more people doing music, and hopefully more interesting music," he adds, reflecting back on AI creativity. "I cannot predict the future, but I'm not worried about AI replacing artists. I'm worried about all the other things, the well-defined problems, like automated healthcare and autonomous vehicles. These things are really going to destroy jobs. But for the creative domains, I don't think it's going to happen."