I need you more than want you, and I want you for all time
Friedrich Nietzsche famously said, “without music, life would be a mistake.” He was also a huge fan of opium and self-prescribed chloral hydrate, so maybe we should be wary of what he considered a “mistake.”
Either way, music is an essential, very human form of expression, bringing joy in a way that can’t be felt through words alone.
A beautiful song like “Wichita Lineman” by the recently departed Glen Campbell connects to virtually anyone on an emotional level - and yet conjures unique feelings in every listener.
For an activity we’ve been doing since people could bang one thing against another thing, the whole “humans making music” process seems to be working out just fine - so why are people trying to change it, and how?
In the first of a two-part look into the future of music consumption and creation, MONTAG asks these questions - and finds out how you can take advantage of AI to make music for yourself. Then, next week, we’ll find out how we shouldn't be scared of a future where our favourite music is made without human input...
C.R.E.A.M.

**The TL;DR answer to the question** “why are we headed for a future where my music is made by AI?” hovers somewhere between “lust for cash” and “the human desperation to innovate.”
The way tech will change music can broadly be cleaved into two paradigms: music that will be made without any human input whatsoever, and music that is made by humans - but in a way which means handing off work to bots.
Neither of these options will fill musicians with anything other than existential dread. But it also might work out a lot better than they’d assume.
Automatic For The People

**Music is, by definition, built from a limited number of notes, chords and melodies**, and thus is ripe for automation. It’s made of the kind of patterns that computers find simple to analyse and replicate.
So AI-produced music will suck, right?
The short answer is no. The longer answer is also no, and - surprise! - you’re already listening to it. And it’s great.
Brian Eno is considered one of modern music’s wizards. A founding member of Roxy Music, he soon quit the band to invent his own type of music: Ambient - the warm, languid, slow music that is “as ignorable as it is interesting.”
It’s the type of music you could hear at airports, as the title of one of his pioneering LPs, Music For Airports, is at pains to point out.
Eno has been producing music that makes itself for decades. Generative music involves presenting a computer with a set of sounds and some loose parameters - and letting it create the music it concludes works best.
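The generative idea can be sketched in a few lines of Python. This is a toy illustration of the principle, not Eno’s actual system: you hand the program a fixed palette of sounds and some loose parameters, and it decides what plays when.

```python
import random

# A fixed palette of sounds -- the "set of sounds" the composer supplies.
PALETTE = ["warm pad", "piano note", "bell", "tape hiss", "silence"]

def generate_piece(bars=8, seed=None):
    """Pick one sound per bar. 'silence' is weighted more heavily --
    a loose parameter that keeps the piece sparse and ambient."""
    rng = random.Random(seed)
    weights = [1, 1, 1, 1, 3]  # favour space over notes
    return [rng.choices(PALETTE, weights=weights)[0] for _ in range(bars)]

# Each run yields a piece that is familiar (same palette, same rules)
# but not identical -- the Reflection idea in miniature.
print(generate_piece(bars=8))
```

Run it twice and you get two different pieces cut from the same sonic cloth; fix the seed and the same piece comes back, the way a streaming-service snapshot of Reflection does.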
Recently, he released Reflection, an album delivered as an app that creates the music anew each time it is launched: whenever you played it, it felt sonically familiar without ever actually being the same.
(And even if you listen to it on a streaming service, you will experience an element of its mutation: every few months Eno quietly uploads a different version of Reflection to a slightly confused - or impassive - audience.)
Reflection is a great album that challenges what an album - and music itself - is. In some ways this is nothing new - before recorded music existed, a song was always different every time you heard it.
But that’s Brian Eno. He produced a bunch of Bowie, U2, and Coldplay albums, and is considered a genius. What about the bedroom artists, or the rest of us music lovers?
Music that makes itself is not a threat to music. It might be a threat to the livelihood of the people who make it, but that’s the same issue we’ll all be facing soon.
Instead, what are the ways music makers will be liberated, supercharged and energised by automation?
Rewind to 1989. It’s a sunny day in Los Angeles, and you’re on the roof of the Capitol Records building. There’s a weird new music playing that sounds like someone took little bits from a hundred classic soul and funk and rock and hip hop records and jigsawed them all together.
You’d be right to think that, ‘cos you’re drunk and you’re at the launch party of the Beastie Boys’ revolutionary Paul’s Boutique LP: an album that was, indeed, made from all those bits of records (and more). Here’s the interesting part: a record like Paul’s Boutique will never be made again.
The reason that it's one-of-a-kind will frustrate anyone who’s listened to the album and been struck by the dazzling scope, audacity (who’d have the guts to shuffle a collage of bits of Beatles songs into a new song?) and funkiness of the ultimate cut-n-paste record.
It’s because the band and their visionary producers, the Dust Brothers, broke the law. They grabbed all the best bits of all the records that they liked the best and, out of the parts, made one that was better. And they didn’t pay for any of these parts.
You can’t do this any more: copyright laws in the music industry have been tightened with industrial-strength monkey wrenches. Using a snippet of another song in your own costs so much money that it rarely makes financial sense.
In fact, it’s often financial insanity: the famous strings in The Verve’s Bitter Sweet Symphony are a sample of an orchestral cover of a Rolling Stones song, and as a result, The Verve had to pay every penny the song earned to Mick ’n’ Keef.
What Can You Do For Me?
Wait, what’s this got to do with automation? Two very important things.
Firstly, Paul’s Boutique was a turning point: the moment the musician openly evolved from being a writer of music to being a curator of sounds, noises and snippets.
Everything is a remix now, and it probably has Nicki Minaj doing a guest verse.
Secondly, and conversely: this cut ’n’ paste method of making music is normal now.
Open GarageBand or any easy-to-use music-making software and you’ll see that making music involves nudging around virtual Lego bricks: this drum beat here, this horn stab there, and this loop of a jazz-flute unaccountably gasping over the top of it all.
It’s long been common for composers to buy “packs” of samples, noises and loops of sound made for you to cut up, move around and make new songs from.
So what if a computer made them for you instead?
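The Lego-brick workflow is simple enough to model in a few lines. This is a hypothetical sketch, not any real DAW’s API: a song is just loops placed on a timeline, and the model is indifferent to whether a human or a machine supplied the loops.

```python
# A toy model of loop-based composition (hypothetical, not a real DAW API):
# a track is a list of (start_bar, loop_name) placements.
def arrange(placements, length_bars):
    """Render a timeline as a list of bars, each holding the loops
    that play in that bar (every loop here is one bar long)."""
    timeline = [[] for _ in range(length_bars)]
    for start, loop in placements:
        if 0 <= start < length_bars:
            timeline[start].append(loop)
    return timeline

song = arrange(
    [(0, "drum beat"), (0, "horn stab"), (2, "jazz-flute loop")],
    length_bars=4,
)
# Nothing in arrange() cares whether "horn stab" came from a sample
# pack or from an AI -- which is exactly the point.
```

Swap the hand-picked loop names for machine-generated ones and the workflow is unchanged, which is why computer-made sample packs slot so neatly into how people already compose.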
It takes half a dozen clicks to create a brand-new, never-before-heard song on Jukedeck, a service that uses AI to generate original compositions.
Jukedeck is simple: fill in a few meaningful parameters (choose “Corporate Tech” if you must) and make some tonal distinctions - and out pops a song that you can stream, download, or buy outright so that you are the actual owner of the composition.
This is great for podcasters who just want a piece of catchy music for their show, and can’t afford to hire a songwriter, or make it themselves.
MONTAG’s companion podcast, the MONTAGE, uses a “Cinematic Sci-Fi” song that we made on Jukedeck called Reckless Doubts (a suspiciously fitting name for any of MONTAG’s activities).
Is This It?
Ho-hum, you might think: this isn’t real music. And you know - maybe it isn't.
Yet this is the very crux of tomorrow's human music-making paradigm.
Here's where the much-maligned human producer is creatively supercharged: why not use Jukedeck, or tomorrow’s more advanced version of it, to make 20 tracks, sample the best bits, and make something human, unique and utterly new from the fragments?
You might be making Paul’s Boutique 2.0 and it’d be the most “now”, most cutting-edge thing you could possibly do in music (at least, that’s what your breathless press release will say.)
So: automated music creation is not to be feared, it's not upsetting the apple cart as much as you'd think, and humans are going to be more essential than ever.
And next week, we find out why that's all wrong, and how you're already humming along to songs made by machines.