
YACHT (Claire L. Evans, Jona Bechtolt, Rob Kieswetter) are set to put out a new record on August 30th, which is exciting in and of itself, but even more intriguing is the fact that they used AI to help them create it. Using a series of processes to “harvest” new musical compositions and lyrics from their back catalog (plus other input from friends, influences, etc.), they were able to structure and perform some pretty incredible tracks. I listened to the album in full with the band at Dolby’s SoHo flagship last night, and afterwards I sat down with them to talk about how the idea to incorporate AI worked in practice, and about the ways they were challenged, both physically and creatively, to make the finished record. Internet-eavesdrop on our full conversation (which blew my fucking mind) below, and pre-order Chain Tripping here.

What was the conversation like when you decided to use this method to make the record?

Jona: There was no first conversation, because we didn’t understand what the method would be at first. It was just an overarching idea of, “Let’s use AI to make our next album.” And the three of us had no idea what that meant. At first we were like, “Oh, it’ll work like Shazam or something! Or we’ll ask Siri to do it! Or we’ll input songs and ask it to write songs for us after we push a button!” But then it ended up being three years of research and having to cobble together a very weird process to make it happen. There was no point where it was intuitive, no point where we were like, “Oh, this’ll be easy!” or “This’ll cut down on time!” It was not efficient.

Claire: I think even up until the point we went into the studio, we had no idea how we were going to do it.

Jona: Yeah.

So when you got the input back from it, what was the most surprising part about that?

Jona: It was tons of small surprises. When we got the input back, it was us running it through…well, first through the command line, and…full disclosure, we’re not coders or programmers. We have no idea what we’re doing in this realm of stuff. [Laughs]

But that’s why I think it’s even cooler, though!

Jona: Thanks! But yeah, we’d take short clips of melodies or drum patterns, so like, two bars or sixteen bars at the longest. We’d choose from a previous song like, “Oh, we like this guitar riff and this bassline.” Then we’d run that through the model, and then it’d multidimensionally find every possible path between those two melodies, and we’d get that data. So he [Rob] and I kept doing it over and over again without listening to the output, because we just knew that when we went into the studio we’d need as many options as possible. But then we had too much at the end.
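[Ed. note: The band doesn’t name their tools in this conversation, but “finding every possible path between two melodies” is a good plain-language description of latent-space interpolation, the kind of thing a model like Google Magenta’s MusicVAE does. Here’s a minimal, hypothetical Python sketch of the idea; encode and decode are stand-ins for a trained melody model, not a real API.]

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors at fraction t."""
    u0, u1 = z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(u0, u1), -1.0, 1.0))
    if np.isclose(np.sin(omega), 0.0):           # (anti)parallel: fall back to lerp
        return (1.0 - t) * z0 + t * z1
    return (np.sin((1.0 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

def interpolate_melodies(encode, decode, clip_a, clip_b, steps=11):
    """encode/decode are hypothetical: encode maps a short MIDI clip to a
    latent vector, decode maps a latent vector back to MIDI. Each step is
    one 'path point' between the two source melodies."""
    za, zb = encode(clip_a), encode(clip_b)
    return [decode(slerp(za, zb, t)) for t in np.linspace(0.0, 1.0, steps)]
```

[Run that over enough riff-and-bassline pairings without auditioning the output, and you can see how a band ends up with “too much at the end.”]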

Rob: It was a good problem to have. We basically just harvested a bunch of stuff and jarred it, put it aside, then cracked it open…

Jona: Gave it a sniff, a whiff. [Laughs] The other thing that we were doing with the same two melodies, for example, is there’s this concept in AI called temperature, and if you run the model at a lower temperature, it takes fewer risks. At a higher temperature, it goes nuts. It like, falls off a cliff. So we ran multiple of the same pairings at different temperatures, so then we had…just so many. And this is just MIDI. It’s symbolic, it’s like sheet music. So we just had that.
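[Ed. note: Temperature is the standard sampling knob Jona is describing: the model’s raw output scores get divided by a temperature before being turned into probabilities, so low values play it safe and high values “fall off a cliff.” A self-contained sketch:]

```python
import numpy as np

def sample_with_temperature(logits, temperature, rng):
    """Pick one option from raw model scores (logits). Low temperature
    sharpens the distribution toward the top choice; high temperature
    flattens it toward anything-goes."""
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    probs = np.exp(scaled - scaled.max())        # softmax, stabilized
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

rng = np.random.default_rng(0)
notes = np.array([2.0, 1.0, 0.5, 0.1])           # toy scores for four notes
print([sample_with_temperature(notes, 0.1, rng) for _ in range(8)])  # mostly note 0
print([sample_with_temperature(notes, 5.0, rng) for _ in range(8)])  # goes nuts
```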

Claire: But they had no structure. I mean, to be fair, these are just melodies that kind of go on and never do anything other than just one note after the next. So what we were doing was trying to find moments where we were like, “Oh, we can loop this and it’ll be a cool riff. Or we can pull this and that will be a guitar solo,” or something like that.

Jona: And we’re a band that’s very interested in patterns. We love patterns in all forms, both visually and in music. So it was going through and identifying which things would be interesting, and then trying to learn them was a whole other thing since the model’s not taking into consideration our three bodies. A lot of it was very difficult to play, and super frustrating, so…

Rob: Even seemingly super simple stuff on the record, we can remember back to trying to play that riff or that drum pattern in the studio and it being this frustrating process. It was interesting to find the patterns we liked, regardless of what we could accomplish physically, and then having to twist our brains to work into that new pattern.

Jona: It made us realize how much we lean on everything we’ve known before when performing music. It’s all about feeling comfortable: whether you’re holding a guitar or playing a drum fill, it’s something you’ve heard in another song that you’re just replicating or trying to put your own spin on, but you’re starting with something you already know. So when you don’t start there, and instead start with something completely foreign, I think that’s where the good stuff happens.

That’s bonkers. Like, the more you’re talking about this, the more my brain is exploding a little bit. How was the lyrical process, then?

Claire: Parallel process. So we took the corpus of two and a half million words, which were all words from our own back catalog, friends, influences, music we grew up listening to, music from the geographic regions we grew up in, and gave that to this creative technologist called Ross Goodwin, who’s a super interesting artist. He trained this algorithm called a character recurrent neural network that basically just taught itself English from only those lyrics. So it doesn’t even really know that it’s doing language; it’s doing like, one letter at a time as a symbolic piece of information. So the stuff it spits out…there’s the same kind of temperature control thing where the low temperature stuff is super repetitive, super kind of punk rock and weird, like, “I NEED I NEED I NEED I NEED I WANT I WANT I WANT I WANT STAB STAB STAB STAB”, and then the high temperature stuff is really all over the place, like fake proper nouns and all these weird new words that didn’t exist before. So it was a lot about splitting the difference between those two things, and holding pieces from high temperature and low temperature and combining them. The model also came up with the titles.
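[Ed. note: A character-level recurrent neural network really does work one letter at a time: it predicts the next character given everything so far, with no built-in concept of words. The details of Ross Goodwin’s model aren’t given here, so this is a minimal, hypothetical PyTorch sketch of the architecture and the temperature-controlled generation loop, assuming stoi/itos are character-to-index lookup tables built from the corpus:]

```python
import torch
import torch.nn as nn

class CharRNN(nn.Module):
    """Predicts the next character from the previous ones; it never
    'knows' it's doing language, just one symbol after another."""
    def __init__(self, vocab_size, hidden_size=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.rnn = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, vocab_size)

    def forward(self, x, h=None):
        y, h = self.rnn(self.embed(x), h)
        return self.head(y), h

@torch.no_grad()
def generate(model, stoi, itos, seed, n_chars=200, temperature=1.0):
    """Sample text character by character from a trained model. Low
    temperature gets repetitive ('I NEED I NEED...'); high temperature
    invents fake proper nouns and words that didn't exist before."""
    model.eval()
    ids = torch.tensor([[stoi[c] for c in seed]])
    logits, h = model(ids)
    out = list(seed)
    for _ in range(n_chars):
        probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
        nxt = torch.multinomial(probs, 1).item()
        out.append(itos[nxt])
        logits, h = model(torch.tensor([[nxt]]), h)
    return "".join(out)
```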

Jona: [Shows a photo of a giant stack of paper] So this is one instance of the model that we printed on dot matrix printer paper.

Holy shit!

Claire: Yeah, we just brought that into the studio.

Jona: Those are all of the lyrics, plus like, five hundred more albums’ worth of lyrics. So Claire sat with a highlighter, just picking out the big phrases.

Claire: So I was doing with lyrics what they were doing with instrumentation, and we’d come together like, “I have this line I really love…what melody works?” But of course, in the same way the melodies don’t think about the body, they also don’t think about the language.

Jona: Right, so she wasn’t singing melodies like, “Oh, I’m reading this on the page, this should sound like this.” We already had a melody, so she had to make the words work.

Claire: So it was like, smashing the words so that they fit. Forcing a word that’s two syllables to be one syllable. Pronouncing things in ways I wouldn’t have pronounced them so that they worked with the melody. A lot of weird sort of gymnastics to make it work, and a lot of, “We should cut this in half and move it apart, rearrange it.” Because obviously we couldn’t add anything.

Jona: Yeah, our rules were very strict. We could subtract, we could cut anything, but we weren’t allowed to add anything in our own minds.

And what was the track-ordering process like? Because the last track does feel very fitting as a closer.

Jona: We looked at the idea of having AI mix or sequence the record, but there’s nothing that does that.

Claire: I think at the end of the day we just did that in a very lo-fi way with Post-Its on the wall, debating the flow of it. I mean, obviously (I think) that last song is a perfect ender, so that was natural. Same process as we normally would use, just listening to it in different permutations and being like, “This feels good.” 

Right. Alright, and (especially at this point) you guys are really known for embracing technology and the machine. Do you ever get scared of that?

Claire: Yeah, of course. AI especially can be scary, but that’s why I think it’s important that artists and people who are not stakeholders in evil enterprises actually get in the mix, and position themselves as part of the conversation about what AI is and how it should be used. Also, we have very few metaphors as a culture for what AI is. Like, we’re still talking about 2001: A Space Odyssey and Her and stuff, and AI isn’t a thing – it’s not an anthropomorphized thing. It’s a series of complicated little processes that have nothing to do with a personality, really. I mean, we have this fantasy of, “Oh, artificial intelligence. That must mean a machine mind.” But that’s not what the actual thing is. So I think the more we can use art as a way of providing new metaphors for what this technology is, so we can start having more conversations about what its implications are, the more meaningful that becomes. At least for us, I don’t think I’d have ever understood it or how it works if we hadn’t forced ourselves to work with it. I think we’ve always learned by doing, and this is kind of the ultimate manifestation of that.

What’s the most interesting thing you learned about yourselves in this experiment?

Claire: I think we learned what our physical and creative limitations are. I mean, I’m not a virtuosic singer, but it’s very clear when you’re trying to take something that you didn’t write, that didn’t come from your body, and then perform it in a way that feels natural…same for the drums and guitar and bass and everything; you come face to face with your own boundaries. And either you say, “Okay, that’s as far as I can go,” or you try to step over them a little bit. And that’s what we tried to do with this: learn seemingly unlearnable parts and push ourselves in new ways.
