AI co-produced Taryn Southern’s new album


♪ The hopefuls, the dreamers ♪ ♪ Born from zeros and ones ♪

– [Dani] Taryn Southern is an online personality whom you might know from her YouTube channel or from her run as a contestant on American Idol. These days, Taryn is interested in emerging tech, which has led to her current project: recording a pop album. ♪ I'm breaking ♪ – These two things might not sound related, but her album has a twist. Instead of writing all the songs herself, Taryn used artificial intelligence to help generate percussion, melodies, and chords. This makes it one of the first albums of its kind, a collaboration of sorts between AI and human. Music-making AI software has come a long way in the past few years, to the point where it can co-produce an album like Taryn's. As a musician and producer, I find the idea of AI being able to do what I do freaky. I met up with Taryn to find out about the process of collaborating with software. Maybe it's not as crazy as it sounds. Do you view AI, when you're working with these platforms, as a tool or a collaborator?

– I've been using the word "tool" a lot just in talking with you, but I do view it more as a collaborator, in that it is giving me source inspiration material, whereas a piano just gives me its notes, and I would think of a piano more as a tool. A tool is something we can wield, and a collaborator is–

– Something you work with.

– Something we work with. So yes, I would say AI feels much more like a collaborator. A collaborator and a tool, 'cause I can also still tell it what to do.

– You have power over it still, for now.

– Yeah.
(laughter)

– [Dani] Taryn uses several different AI programs to write her music, including software from IBM, Google, and Amper. Most of these systems work using deep learning networks, a type of AI that relies on analyzing large amounts of data. Basically, you feed the software tons of source material, from dance hits to disco classics, which it then analyzes to find patterns. It picks up on things like chords, length, tempo, and how notes relate to one another, learning from all of this input so it can create its own melodies.
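As a loose illustration of that learn-patterns-then-generate loop (a generic sketch in PyTorch, not the actual code behind Amper, Watson, or Magenta), a minimal melody model can be trained to predict the next note in the melodies it is fed, then asked to sample new notes one at a time:

    import torch
    import torch.nn as nn

    VOCAB = 128  # the MIDI pitch range, 0-127

    class MelodyModel(nn.Module):
        def __init__(self, hidden=256):
            super().__init__()
            self.embed = nn.Embedding(VOCAB, 64)
            self.lstm = nn.LSTM(64, hidden, batch_first=True)
            self.head = nn.Linear(hidden, VOCAB)

        def forward(self, notes):  # notes: (batch, time) integer pitches
            out, _ = self.lstm(self.embed(notes))
            return self.head(out)  # logits over the next pitch

    model = MelodyModel()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # "Feeding it source material": real systems train on thousands of
    # songs; this random tensor is just a stand-in for one encoded melody.
    melody = torch.randint(0, VOCAB, (1, 32))
    logits = model(melody[:, :-1])  # predict note t+1 from notes up to t
    loss = loss_fn(logits.reshape(-1, VOCAB), melody[:, 1:].reshape(-1))
    loss.backward()
    opt.step()

    # Generation: start from a short seed and sample one note at a time.
    with torch.no_grad():
        seed = melody[:, :4]
        for _ in range(16):
            probs = torch.softmax(model(seed)[:, -1], dim=-1)
            seed = torch.cat([seed, torch.multinomial(probs, 1)], dim=1)
    print(seed.tolist())  # the generated "melody" as MIDI pitch numbers

A production system adds note durations, chords, and vastly more training data, but this is the core of how analyzing source material turns into new melodies.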
– How has it affected your songwriting?

– Well, for one, I have a new language around music that I didn't have before, because I'm not a musician. I know very, very little about music theory, so I understand minor chords and major chords, and I can plunk out a few keys on the piano, but my musical knowledge really ends there. And now, using AI, I'm writing my lyrics and vocal melodies to the actual music and using that as the source of inspiration.

– So what are the key differences between the different platforms that you've been using?

– The key difference is usability. With Watson and Magenta, you've gotta go on GitHub and sort of unpack the developer language, and I definitely had to brush up on my skills with the help of some of the engineers on those teams. So I think that's a potential barrier to entry with some of these tools: they do require some coding knowledge. Amper is, I think, the easiest. It's front-facing. The interface is super simple and intuitive.

– [Dani] How did you find Amper?

– Amper was the first one I found, because when I went online to search what tools were out there, I knew of Watson, but at the time they hadn't released any open-source or public-facing software, so I searched to see what else was available, and the first article that came up about AI music focused on Amper. So I went to their website, and it was super easy to use.

– [Dani] Most AI music programs kick out MIDI, and MIDI is sort of like sheet music in that it's instructions for how a melody should be played. It's not audio; it's a protocol. Amper builds tracks from prerecorded samples and spits out actual audio, not MIDI data, meaning there's something to listen to right away. From there, you can change the tempo, the key, or swap out instruments, so you can start with something played in one style and change out the set of instruments for a completely different sound. This audio can then be exported as a whole or as individual layers of instruments, which are known as stems. Stems can then be modified further within a digital audio workstation.
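To make that MIDI-versus-audio distinction concrete, here is a tiny sketch using the mido library (the four-note phrase is made up). The file it writes contains only event instructions, which note, how hard, and when; it makes no sound until a synthesizer or sampler renders it, which is also why tempo, key, and instrument can be changed after the fact:

    import mido

    mid = mido.MidiFile()  # default resolution: 480 ticks per beat
    track = mido.MidiTrack()
    mid.tracks.append(track)

    # Tempo is stored as metadata, not baked into any recorded sound,
    # which is why tools can change it freely later.
    track.append(mido.MetaMessage('set_tempo', tempo=mido.bpm2tempo(120)))

    for pitch in (60, 62, 64, 67):  # C4, D4, E4, G4
        track.append(mido.Message('note_on', note=pitch, velocity=80, time=0))
        track.append(mido.Message('note_off', note=pitch, velocity=0, time=480))

    mid.save('melody.mid')  # a few hundred bytes of instructions, zero audio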
– So there are a couple of other AI music-making platforms out there, but what differentiates Amper?

– For us, we've always focused on speed, quality, and control. And control is a huge element, especially as an artist. What do you wanna manipulate? We're one of the few where you can manipulate, you know, tempo, key, and instrumentation. You're like, "I don't like this piano, I'd rather have a guitar do that." Or, "I want this other piano in place of that." So it's a lot more of you working with it and then creating the final product from there.

– What's the process, then, to get those sounds into Amper?

– We own all of our own audio content. We sample all of our own instruments, note by note, because we want artists to be able to manipulate that. So I have to record a guitar, every note, every possible thing it can do, so that we can recreate a performance from that, versus having a loop, 'cause we don't use loops in anything whatsoever. Everything is note by note.
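A rough sketch of what that note-by-note approach implies (Amper's actual pipeline isn't public, and the sample file names below are hypothetical): with one recording per pitch, a performance is assembled by placing samples on a timeline, so the same melody can be re-rendered at any tempo or on any sampled instrument, which a fixed loop can't do:

    import numpy as np
    import soundfile as sf

    SR = 44100  # sample rate in Hz

    def render(notes, seconds_per_beat=0.5):
        """notes: list of (pitch_name, start_beat, length_beats) tuples."""
        total = int(SR * seconds_per_beat * max(s + l for _, s, l in notes))
        out = np.zeros(total)
        for pitch, start, length in notes:
            # Hypothetical per-note recordings, e.g. samples/guitar_C4.wav
            sample, _ = sf.read(f'samples/guitar_{pitch}.wav')
            if sample.ndim > 1:  # fold stereo recordings down to mono
                sample = sample.mean(axis=1)
            begin = int(SR * seconds_per_beat * start)
            end = min(begin + int(SR * seconds_per_beat * length),
                      total, begin + len(sample))
            out[begin:end] += sample[:end - begin]
        return out

    # Swapping 'guitar' for 'piano' in the sample paths re-renders the
    # same melody on a different instrument, the kind of control described.
    audio = render([('C4', 0, 1), ('E4', 1, 1), ('G4', 2, 2)])
    sf.write('phrase.wav', audio, SR)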
– [Dani] Taryn's album doesn't just rely on artificial intelligence. She also works with other humans, including her producer, Ethan. They invited me to one of their recording sessions for her song "New World" so I could see how producers work with AI in the studio.

– I like that with the AI material you're given new ideas that you wouldn't come up with on your own, but that you still have the freedom to shape those ideas into something that makes sense to you, so there's still creative expression involved, and the end result still feels like something that represents me and Taryn. And so I like that.

– [Dani] To get a sense of the difference between where an AI song starts and the finished product, here's an early Amper export. ("Breaking Free" by Taryn Southern) And here is the final arrangement by Taryn.

♪ I'm learning how to break free ♪ ♪ I'm breaking ♪ ♪ I'm breaking free ♪ ♪ I'm breaking ♪

– A lot of times, when other musicians come in having done a demo on a guitar, they lay down that guitar track, and then we talk about what we wanna build upon that. Whereas in this case, Taryn comes in and her guitar is the AI, so she presents that, and then we talk about, well, what do we see this turning into, and then we can add elements around that and restructure it. So it's still similar to the more traditional sense of artists coming in.

– In many ways, music is the highest form of expression that humanity has. It's like our last–

– Our last bastion. Yes, our last bastion, and I understand that. It does force change upon people in some form or another, and maybe some of that will be bad. Like I said, I can't predict the future. I do think it will break people out of their comfort zones and potentially result in new forms of music, which could be seen as negative for other forms of music. Did the rise of hip-hop and EDM take away from pop?

– It changed it.

– It changed it. And now we infuse EDM and hip-hop into top-40 pop, right? And so I think we'll see something similar.

(rhythmic instrumental music)

– This video is presented by Aloft Hotels, different by design. If you enjoyed this, please subscribe to our YouTube channel.

62 thoughts on “AI co-produced Taryn Southern’s new album”

  1. I like the tech, but the music sucks. It's bland and boring, like most pop music. Maybe it's only this song, though.

  2. Have always been interested in this; good to know tech advancements are making artists more aware of their capabilities.

  3. It's a good concept in theory, but if you're feeding it other people's music, it feels as if it'll be not so original and quite heavily influenced. What should be done is feeding it the person's own music, to help it better understand the person and delve deeper into the person.

  4. Her vocals have an ‘autotune’ quality (not in a good way) to them. Do other people hear that? I haven't listened to more than what was presented in this video.

  5. Melodyne can make a singer out of anyone, but the few song snippets in this video sound cheesier than Disney songs. 😋 I'll still check out some more of Ms. Southern's work.

  6. After inputting all that source material of pop music, would the results of this program be considered a "derivative work"?

  7. She had a video with Siraj Raval (a YouTuber focused on deep learning, Udacity, teaching, etc.) earlier. The guy is too hyper and surface-level for my taste (sentdex and DanVonBoxel are wayyyy better), but the info about tools and updates he gives is great.

  8. Nope. The day all of a composition (including vocals) is created and performed by computers is the day music and the emotion and soul that humans (and their faults) inject into it, dies. Music without the human factor is like steel versus wood: one is technically superior yet cold while the other is natural and exudes warmth.

  9. OK, I am an unabashed music composer and have read through all the comments. I get the range of emotions, and we are all free to express them. Just don't forget recent music history, like musique concrète (French: “concrete music”), an experimental technique of musical composition using recorded sounds as raw material. The technique was developed about 1948 by the French composer Pierre Schaeffer and his associates at the Studio d'Essai (“Experimental Studio”) of the French radio system. And there's John Cage's infamous composition 4'33". In 5 years we may be composing, producing, and recording music from our watches.

  10. Here we have a meek musician attempting to ride the AI wave in a very underwhelming way. Experimenting with AI in production has explosive potential, but publishing garbage like this in such a weak, early and unimaginative state is just sad. Shame on you verge and dani; you're both hacks.

  11. Are there any kind of fair-use laws or copyright issues with using music (input) that is already recorded to influence the neural-network outcome of the AI music? I feel like this music should be licensed to be input for the neural network to work on. I think we have a lawsuit here.

  12. The problem with using past data to make more music is that you create a self-fulfilling-prophecy kind of music… disruptive changes in style will rarely emerge from AI coded like that.

  13. I think it's a really cool way to produce music, and it's good. But I don't think I prefer this music over just piano pieces. 🙁

  14. Wait till it doesn't pay the artists… then it will be obvious, or even more obvious, how this is all about the bottom line and not the art. Computer-generated art, etc. Nothing inspiring about this.
