
AI Has Been Creating Music and the Results Are…Weird

In late May, a small crowd at St. Dunstan's church in East London's Stepney district gathered for two hours of traditional Irish music. But this event was different; the tunes it featured were composed, in part, by an artificial intelligence (AI) algorithm, dubbed folk-rnn, a stark reminder of how cutting-edge AI is gradually permeating every aspect of human life and culture—even creativity.

Developed by researchers at Kingston University and Queen Mary University of London, folk-rnn is one of numerous projects exploring the intersection of artificial intelligence and the creative arts. Folk-rnn's performance was met with a mixture of fascination, awe, and consternation at seeing soulless machines conquer something widely considered the exclusive domain of human intelligence. But these explorations are also uncovering new ways for humans and machines to cooperate.

[youtube https://www.youtube.com/watch?v=HOPz71Bx714&w=560&h=315]

How Does AI Create Art?

Like many other AI products, folk-rnn uses machine learning algorithms, a subset of artificial intelligence. Instead of relying on predefined rules, machine learning ingests large data sets and creates mathematical representations of the patterns and correlations it finds, which it then uses to accomplish tasks.

Folk-rnn was trained on a crowd-sourced repertoire of 23,000 Irish music transcriptions before starting to crank out its own tunes. Since its inception in 2015, folk-rnn has undergone three iterations and has produced more than 100,000 songs, many of which have been compiled in a 14-volume online compendium.
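
Folk-rnn itself is a recurrent neural network trained on ABC-notation transcriptions, but the underlying idea, learning the statistical patterns of a corpus and then sampling new sequences from them, can be illustrated with something much simpler. The sketch below is a minimal stand-in in plain Python, using a first-order Markov chain over note tokens and made-up tune fragments rather than real folk-rnn data:

```python
import random

# Toy stand-in for folk-rnn's approach: the real system trains an LSTM on
# ~23,000 ABC-notation transcriptions; this sketch uses a simple Markov
# chain over note tokens to show the same "learn patterns, then sample"
# idea. The training fragments below are invented for illustration.
corpus = [
    "G A B c B A G E",
    "D E G A B A G E",
    "G A B c d c B A",
]

def train(tunes):
    """Count which note tends to follow which in the corpus."""
    table = {}
    for tune in tunes:
        notes = tune.split()
        for a, b in zip(notes, notes[1:]):
            table.setdefault(a, []).append(b)
    return table

def generate(table, start, length, seed=0):
    """Sample a new tune by walking the learned transitions."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

table = train(corpus)
print(generate(table, "G", 8))
```

A real system replaces the lookup table with a trained network that conditions on the whole history of the tune, which is what lets it capture structure like phrase repeats and cadences.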

Flow Machines, a five-year project funded by the European Research Council and coordinated by Sony's Computer Science Labs, also applied AI algorithms to music. Its most notable—and bizarre—achievement is "Daddy's Car," a song generated by an algorithm that was trained with lead sheets from 40 of The Beatles' hit songs.

[youtube https://www.youtube.com/watch?v=LSHZ_b05W7o&w=560&h=315]

Welcome Mistakes

Algorithms can mimic the style and feel of a musical genre, but they often make basic mistakes a human composer would not. In fact, most of the pieces played at folk-rnn's debut were tweaked by human musicians.

"Art is not a well-defined problem, because you never know exactly what you want," says Francois Pachet, who served as the lead researcher at Flow Machines and is now director of Spotify's Creator Technology Research Lab. But, he adds cheerfully, "it's good actually that art is not well defined. Otherwise, it would not be art."

The generated lead sheet for "Daddy's Car" was also edited by a human musician, and some tracks were added by hand. "There was pretty much a lot of AI in there, but not everything," Pachet says, "including voice lyrics and structure, and of course the whole mix and production."

"The real benefit is coming up with sequences that aren't expected, and that lead to musically interesting ideas," says Bob Sturm, a lecturer in digital media at Queen Mary University of London who worked on folk-rnn. "We want the system to create mistakes, but the right kind of mistakes."

[youtube https://www.youtube.com/watch?v=lZKc363886Y&w=560&h=315]

Daren Banarsë, an Irish musician who examined and played some of the tunes generated by folk-rnn, attested to the benefits of interesting mistakes. "There was one reel which intrigued me," he says. "The melody kept oscillating between major and minor, in a somewhat random fashion. Stylistically, it was incorrect, but it was quirky, something I wouldn't have thought of myself."

Spotify's Pachet explains that these unexpected twists can actually help improve the quality of pop music. "Take the 30 or 50 most popular songs on YouTube. If you look at the melody, the harmony, the rhythm and the structure, they are extremely conventional, which is quite depressing. You have only three or four chords, and they're always the same. Creative AI is very interesting, not only because it's fun, but also because it brings hope. I hope that we could change or impact the quality of the most popular songs today."

No Right Answers

"The thing that makes art wonderful for humanity is that there is no right answer—it's entirely subjective," says Drew Silverstein, CEO and co-founder of Amper Music, an AI startup based in New York. "You and I might listen to the exact same piece of music, and you might like it, and I might hate it, and neither of us is right or wrong. It's just different.

"The challenge in the modern world is to build an AI that is capable of reflecting that subjectivity," he adds. "Interestingly, sometimes, neural networks and purely data-driven approaches are not the right answer."

Oded Ben-Tal, senior lecturer in music technology at Kingston University and a researcher for folk-rnn, points out another challenge AI faces with respect to creating music: data does not represent everything.

"In some ways, you can say music is information. We listen to a lot of music, and as a composer, I get inspired by what I hear to make new music," Ben-Tal says. "But the translation into data is a big stumbling block and a big problem in that analogy. Because no data actually captures all the music."

To put it simply, an AI algorithm's interpretation and understanding of music and arts is very different from that of humans.

"In the case of our system, it's far too easy to fall into the trap of saying it's learning the style or it's learning aspects of Irish music, when in fact it's not doing that," says Sturm. "It's learning very abstract representations of this kind of music. And these abstract representations have very little to do with how you experience the music, how a composer puts them together in the context of this music within the tradition.

"Humans are necessary in the pursuit because, at the end of the day, we have to make decisions on whether to incorporate certain things produced by the computer that we curate from this output and create new music," Sturm says.

Google's DeepDream

In the visual arts, the divide between human and machine perception is even more pronounced. Take DeepDream, an inside-out version of Google's super-efficient image-classification algorithm. Given a photo, it looks for familiar patterns and modifies the image to look more like the things it has identified. This can be useful for turning rough sketches into more polished drawings, but it yields unexpected results when left to its own devices: give DeepDream an image of your face, and if it finds a pattern that looks like a dog, it will turn part of your face into a dog.
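
DeepDream's core mechanic is gradient ascent on the input: instead of adjusting the network to fit the image, it adjusts the image's pixels so a chosen layer's activations grow stronger. The toy below is a conceptual sketch of that loop, not Google's actual code; it swaps the deep network for a single hand-made diagonal "feature detector" so the whole idea fits in a few lines of NumPy:

```python
import numpy as np

# Conceptual sketch of DeepDream's loop: gradient ascent on the INPUT
# image to make a chosen feature detector fire more strongly. Here the
# "network layer" is one hand-made 3x3 diagonal filter standing in for
# the "dog detector"; real DeepDream uses deep conv-net layers.
rng = np.random.default_rng(0)
image = rng.random((3, 3))   # a tiny random "photo"
pattern = np.eye(3)          # the feature we want the image to contain

def activation(img):
    """How strongly the feature detector fires on this image."""
    return float((img * pattern).sum())

before = activation(image)
for _ in range(20):
    grad = pattern  # d(activation)/d(image) for this linear score
    image = np.clip(image + 0.1 * grad, 0.0, 1.0)  # ascend, keep pixels valid
after = activation(image)
print(before, "->", after)
```

After the loop the image literally contains more of the pattern, which is the same reason DeepDream's outputs sprout dog faces wherever the classifier thought it glimpsed one.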

"It's almost like the neural net is hallucinating," an artist who interned at Google's DeepMind AI lab said about the software in an interview with Wired last year. "It sees dogs everywhere!"

But AI-generated art often looks stunning and can rake in thousands of dollars at auctions. At a San Francisco art show held last year, paintings created with the help of Google's DeepDream sold for up to $8,000.

The Business of Creative AI

While researchers and scientists continue to explore creative AI, a handful of startups have already moved into the space and are offering products that solve specific business use cases. One is Silverstein's Amper Music, which he describes as a "composer, producer, performer that creates unique professional music tailored to any content in a matter of seconds."

To create music with Amper, you specify the desired mood, length, and genre. The AI produces a basic composition in a few seconds that you can tweak and adjust. Amper also offers an application programming interface (API), so developers can incorporate the platform's creative power into their software.
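
As a rough illustration of what driving such a service programmatically might look like, here is a hypothetical client-side sketch. The field names and payload shape are invented for illustration; Amper's real API may look entirely different:

```python
import json

# Hypothetical sketch of a client for a music-generation API in the
# style of Amper's. The parameter names and JSON shape below are
# invented; they are not Amper's documented interface.
def build_request(mood, genre, length_seconds):
    """Assemble the JSON payload a client might POST to such a service."""
    if length_seconds <= 0:
        raise ValueError("length must be positive")
    return json.dumps({
        "mood": mood,
        "genre": genre,
        "length_seconds": length_seconds,
    })

payload = build_request("uplifting", "pop", 30)
print(payload)
```

The point is the workflow, not the wire format: the creative decisions are compressed into a handful of high-level parameters, and the service does the rest.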

[youtube https://www.youtube.com/watch?v=lyXrU_Qo6UQ&w=560&h=315]

Jukedeck, a London-based startup founded by two former Cambridge University students, provides a similar service. Like Amper, it takes a handful of basic parameters from the user and returns an original music track.

The main customers of both companies are businesses that require "functional music," the type used in ads, video games, presentations, and YouTube videos. Jukedeck has created more than 500,000 tracks for customers including Coca-Cola, Google, and London's Natural History Museum. Composers are also learning to use the tools to enhance the music they create for their customers.

A third startup, Australia-based Popgun, is building an AI musician that can play music with humans. Named Alice, the AI listens to what you play and then responds instantly with a unique creation that fits with what you played.
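
Alice's actual models are learned, but the call-and-response pattern itself can be sketched in a few lines. The scale and the transposition rule below are invented for illustration and have nothing to do with Popgun's real algorithm:

```python
# Toy call-and-response in the spirit of Popgun's Alice (not its real
# method): answer the player's phrase with the same contour shifted up
# two scale steps, so the reply automatically "fits" the same scale.
SCALE = ["C", "D", "E", "F", "G", "A", "B"]  # C major, one octave

def respond(phrase, shift=2):
    """Reply to a phrase by moving each note up within the scale."""
    return [SCALE[(SCALE.index(n) + shift) % len(SCALE)] for n in phrase]

print(respond(["C", "E", "G"]))  # → ['E', 'G', 'B']
```

A learned system replaces the fixed shift with a model of what response is musically plausible, but the interaction loop, listen then answer in kind, is the same.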

[youtube https://www.youtube.com/watch?v=y_zUtY05TuM&w=560&h=315]

In the visual arts industry, business use cases are gradually emerging. Last year, Adobe introduced Sensei, an AI platform aimed at improving human creativity. Sensei assists artists in a number of ways, such as automatically removing the background of photos or finding stock images based on the context of a poster or sketch.

Collaboration Between AI and Human Artists

Perhaps not surprisingly, these startups are founded and managed by people who have strong backgrounds as artists. Amper's Silverstein studied music composition and theory at Vanderbilt University and composed music for TV, films, and video games. Ed Newton-Rex, founder and CEO of Jukedeck, is also a practiced music composer.

But not everyone is convinced of AI's positive role in the arts. Some of the attendees at folk-rnn's event described the AI-generated pieces as lacking in "spirit, emotion and passion." Others expressed concern about the "cultural impact and the loss of the human beauty and understanding of music."

"I haven't met one musician that I've told about this who hasn't reacted with something close to the negative side of things," said Úna Monaghan, a composer and researcher involved in folk-rnn who spoke to Inverse. "Their reaction has been from slightly negative, to outright 'why are you doing this?'"

The developers of creative AI algorithms do not generally share these concerns. "I don't think humans will become redundant in music-making," says Newton-Rex. "For a start, we as listeners care about much more than just the music we're listening to; we care about the artist, and about their story. That will always be the case."


"We think of functional music as music that is valued for its use case and not for the creativity or collaboration that went into making it," Silverstein says. But artistic music, Silverstein explains, "is much more about the process than the use case. Steven Spielberg and John Williams writing the score of Star Wars, that's about a human collaboration."

"The key use-cases we see lie in collaboration with musicians," says Jack Nolan, co-founder of Popgun. "Artists can use Alice as a source of creative inspiration or to help them come up with melodies and chord progressions in their music. We don't think people will ever stop wanting to create their own sounds. We think AI will help them do this, rather than replace them."

Daren Banarsë agrees on the benefits of collaboration. "I always find it daunting when I have to start a large-scale composition. Maybe I could give the computer a few parameters: the number of players, the mood, even the names of some of my favorite composers, and it could generate a basic structure for me," he says. "I wouldn't expect it to work out of the box, but it would be a starting point. Or it could output a selection of melodic ideas or chord progressions for me to look through. And somewhere in there, there's going to be a computer glitch or random quirk, which could take me in a completely unexpected direction."

Ben-Tal admits that some jobs might be affected. "Working musicians will have to adapt," he says. "I show this to my students and say, 'You need to up your game.' This will mean some of the entry-level jobs into the music industry will not be there in five or ten years, or you'll need to do things differently or have a different set of skills."

'Democratizing Creativity'

AI creativity can also help people without innate talent or hard-earned skills express themselves artistically. Take Vincent, an AI drawing platform that transforms rough sketches into professional-looking paintings, or the AI music platforms that create decent music with minimal input.
Jukedeck's Newton-Rex describes this as "democratizing" creativity. "People with less formal musical education can get to grips with the basics of music and use AI to help them make music," he says.

Pachet concurs. He draws an analogy between recent AI developments and the arrival of the first digital synthesizers in the 80s, followed by digital samplers. At the time, there was a similar fear that musicians would lose their jobs to computers. "But what happened was the exact opposite, in a sense that everyone took these new machines and hardware with them and learned how to use them productively," he says. "The music industry exploded in some sense."

"There will be more people doing music, and hopefully more interesting music," he adds, reflecting back on AI creativity. "I cannot predict the future, but I'm not worried about AI replacing artists. I'm worried about all the other things, the well-defined problems, like automated healthcare and autonomous vehicles. These things are really going to destroy jobs. But for the creative domains, I don't think it's going to happen."
