Sunday, January 13, 2013

Why The Next Big DJ Will Be An Algorithm


Iamus is one of those rare musicians who’s attained global stardom without the outsized ego or sense of entitlement that often goes hand-in-hand with fame.


Iamus is also one of those rare musicians who is an algorithm.


The computer-composer, created by a research team led by Francisco Vico at Spain’s University of Malaga, has released an album, composed symphonies performed by the London Symphony Orchestra and “written” over 1 billion songs in every genre imaginable. (Vico says rock and funk were among the most difficult styles for Iamus to master.)



The machine's music is all but indistinguishable from the melodies created by humans. The Guardian held a musical Turing test that asked its audience to choose the clip, from a list of five, that had been composed by a computer. Twenty-five percent of readers correctly identified Iamus’ "Hello World" as the algorithm’s. But 30 percent picked a Gustav Mahler piece.


Iamus is staking out new ground for machines by proving they can reach a higher plane of artificial intelligence: artificial creativity.


“Sooner or later, computers will be doing art in every sense, not just music,” said Vico.


Iamus’s algorithm draws on principles of Darwinian evolution and biodiversity. As Nature’s Philip Ball explains, “The computer generates very simple ‘musical genomes’ -- little motifs that are evolved, mutated and elaborated until they acquire genuine musical content and interest.”
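
Iamus’ actual genome encoding is unpublished, so what follows is only a minimal sketch of the evolve-mutate-select loop Ball describes: here a “genome” is reduced to a list of MIDI pitch numbers, and the fitness function (which rewards stepwise melodic motion) is invented purely for illustration.

```python
import random

# Toy "musical genome": a list of MIDI pitch numbers. All constants and
# the fitness heuristic below are illustrative, not Iamus' real encoding.
MOTIF_LEN = 8
PITCH_RANGE = (48, 72)  # C3..C5 in MIDI numbers

def random_motif():
    return [random.randint(*PITCH_RANGE) for _ in range(MOTIF_LEN)]

def mutate(motif, rate=0.2):
    """Randomly nudge some pitches up or down a couple of semitones."""
    return [p + random.choice([-2, -1, 1, 2]) if random.random() < rate else p
            for p in motif]

def fitness(motif):
    """Made-up score: reward stepwise motion, penalize large melodic leaps."""
    leaps = [abs(a - b) for a, b in zip(motif, motif[1:])]
    return -sum(max(0, leap - 4) for leap in leaps)

def evolve(generations=200, pop_size=50, keep=10):
    """Evolve random motifs: keep the fittest, refill with their mutants."""
    population = [random_motif() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:keep]  # elitist selection
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - keep)]
    return max(population, key=fitness)

print(evolve())
```

The real system evolves far richer structures than pitch lists, but the loop shape -- generate, mutate, select for "musical content and interest" -- is the same.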


Vico is now using that technology to seed a new startup, Melomics Media, which will sell royalty-free versions of Iamus’ catalog of more than 1 billion songs online. Music will retail for $0.99 per kilobit, and buyers will receive all rights to any song they purchase. Vico predicts that movie and commercial producers will eventually score their work with Iamus’ compositions.


“We’re offering music as a commodity,” he said.


For our “Life As” series, we asked Vico about the subliminal advertising he predicts will be coming soon to our favorite songs; how music will mimic our emotions; and why artificial creativity may be a boon for human ingenuity.


You mentioned that you were able to reverse-engineer the Nokia cellphone ring and then mutate that musical “genome” to create 1 million different variations of that tune. Why? How does this fit with your ambitions for Melomics?

This opens up a very interesting way of advertising: delivering an ad without the user noticing she’s getting one.


Imagine you’re playing music, and at some point in the song, you hear something somewhat familiar. It’s not the Nokia tune, but it’s close enough to evoke that concept in your mind, subconsciously representing the Nokia brand in your brain.


Say you have an earworm [a piece of music that you can’t get out of your head] for a brand or product and you promote it in a song. Mutations of that tune could be played everywhere, in different songs, and you’d be getting advertising even when you didn’t realize it. And you don’t mind, because it’s not really disturbing you. It doesn’t stop the music to play, say, the Coca-Cola jingle.
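
To make the mutated-earworm idea concrete, here is a toy sketch (not Melomics’ actual method) that mutates a seed motif and keeps only variants within an arbitrary similarity bound, so each one remains recognizably close to the original. The seed pitches and the bound are invented for the example.

```python
import random

def mutate(motif, rate=0.3):
    """Nudge some pitches up or down a couple of semitones."""
    return [p + random.choice([-2, -1, 1, 2]) if random.random() < rate else p
            for p in motif]

def distance(a, b):
    """Total pitch deviation between two equal-length motifs."""
    return sum(abs(x - y) for x, y in zip(a, b))

def recognizable_variants(seed, count, max_drift=6):
    """Collect mutations that changed the seed, but not by more than
    max_drift semitones in total -- a made-up 'still evokes it' bound."""
    variants = []
    while len(variants) < count:
        candidate = mutate(seed)
        if 0 < distance(seed, candidate) <= max_drift:
            variants.append(candidate)
    return variants

seed = [64, 62, 54, 56, 61, 59, 50, 52]  # illustrative pitches, not the real tune
for variant in recognizable_variants(seed, count=3):
    print(variant)
```

Run at scale, a filter like this is one plausible way to turn a single jingle genome into the “1 million variations” Vico mentions while keeping each one familiar enough to trigger the brand association.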


What other new abilities do you plan to give Iamus?

We are working on creating empathetic music that adapts to you and the evolution of your physiological states. The music player, whatever the device is, will learn from your current situation to know what you need to hear.


For example, say you’re in bed but you can’t fall asleep. The program that is running on, say, your smartphone could know your current state -- whether you’re agitated or relaxed -- and the music could evolve according to that, changing the volume, tempo and instruments. It’s exactly as if you had a violinist looking at you and trying to play music to help you fall asleep.


We have the possibility of generating small fragments of music, then playing these fragments in a certain order that is defined by an algorithm.
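
Melomics has not published how that sequencing works; as a rough sketch under invented assumptions (a stubbed-out heart-rate sensor and placeholder fragment names standing in for real generated audio), an adaptive player loop might look like this:

```python
import random

# Hypothetical library of pre-generated fragments, keyed by mood.
calm_fragments = ["slow_strings_a", "slow_strings_b", "soft_piano"]
neutral_fragments = ["mid_tempo_a", "mid_tempo_b"]

def read_heart_rate():
    """Stand-in for a real physiological sensor; returns beats per minute."""
    return random.randint(55, 95)

def next_fragment(bpm):
    """Choose the next fragment from the listener's current state:
    the more agitated the listener, the calmer the music."""
    return random.choice(calm_fragments if bpm > 80 else neutral_fragments)

for _ in range(5):  # one playback step per loop iteration
    bpm = read_heart_rate()
    print(f"heart rate {bpm} bpm -> play {next_fragment(bpm)}")
```

The feedback loop is the point: the order of fragments is not fixed in advance but recomputed from the listener’s state at each step, which is what lets the music “evolve” with you.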


What are Iamus’ current limitations? How do you hope to improve a computer's ability to compose music?

I think there is a huge bias against computer composers. But even putting that bias aside, many people still think that the music [made by computers] seems to go nowhere. It’s empty music: it’s enjoyable, but in principle there’s no message behind it, because the computer did not mean anything with the music. The computer did not encode into the music an intention like, “try to evoke this feeling in the listener at this point, then at this other point, change to this other feeling.”


In the future we could add that layer of feelings, of intentionality; introducing intentionality into music is something we plan to do with Iamus over the next few years. This will be very, very easy compared to what we have already done.


Another question to ask is, what’s the use of that? Why should computers have feelings? This I don’t know.


There were some experts in artificial intelligence and cognition who considered chess a creative, intellectual endeavor on par with music and literature -- until Deep Blue beat world chess champion Garry Kasparov. “My God, I used to think chess required thought,” author and cognitive scientist Douglas Hofstadter said after Deep Blue’s victory. “Now I realize it doesn’t.” How will Iamus’ achievements change how we think about music? And creativity?


We know very little about music in the grand scheme of things -- just what humans have been able to do in the last few centuries, no more than that. With this tool that we have now, we could really explore the music space much faster and more deeply.


In the case of music, technology will help us discover new genres, new instruments, new structures, new ways of playing, new ways of experiencing music and, of course, it will greatly affect the music industry. Imagine you have a genome in front of you that represents a rock song, and another one that represents flamenco music. You can create an entirely new genre by combining the genres you already have. This is a very powerful tool that will speed up the development of music.
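
Vico’s genome-combination idea maps naturally onto genetic crossover. Purely as an illustration (the pitch-list “genomes” below are invented, and real genre genomes would be far richer than pitch sequences), a one-point crossover might look like:

```python
import random

# Toy "genre genomes": invented pitch lists standing in for whatever
# representation Melomics actually uses.
rock_genome = [40, 40, 43, 45, 40, 38, 40, 43]
flamenco_genome = [52, 53, 56, 53, 52, 50, 52, 56]

def crossover(a, b):
    """Splice the head of one genome onto the tail of the other."""
    cut = random.randint(1, len(a) - 1)
    return a[:cut] + b[cut:]

hybrid = crossover(rock_genome, flamenco_genome)  # a candidate "new genre"
print(hybrid)
```

In an evolutionary setting, hybrids like this would then be mutated and selected further, which is what makes recombining existing genres a search through new musical territory rather than a one-off mashup.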


What will be the most significant change in music we see five years from now?

The arrival of computers will democratize music: everyone will be able to produce music, just like everyone is able to take wonderful photographs.


When people got cheap digital cameras, they started taking pictures of everything. Now, when you go to Flickr, you can see professional-quality pictures that weren’t taken by professional photographers. Music right now is completely different from photos because you can’t just capture something you’ve seen in the world and then upload it. Soon, however, computers will be able to create this musical world that we can take a "picture" of somehow.


The main contribution of artificial intelligence to the music industry will be that anybody will be able to pick up a song and either leave it as it is -- they’ll be the “musicians” who found this incredible song -- or they’ll be able to slightly adapt the raw material, with very simple tools, into something very beautiful.


It will be disruptive because there will be many more musicians. Anybody who has some musical sensibility and ear will be able to produce wonderful music, even if he or she doesn’t have any knowledge of music composition.


Your computer can compose a song in seconds. So when will Melomics have its first Lady Gaga?

Hopefully never. But I predict that this year, you’ll be able to download pieces from Melomics that can be taken directly to a discothèque, and people will think the songs were made by a human DJ.



This interview has been edited and condensed for clarity.

