During the height of the Kendrick Lamar–Drake beef earlier this year, disses and responses were flying thick and fast across social media. In the midst of it all, comedian and creator Will Hatcher helped make history when legendary hip-hop producer Metro Boomin sampled Hatcher’s song “BBL Drizzy” for his diss track instrumental of the same name. Everyone wanting to take shots at Drake rapped on the beat; Metro Boomin gained notoriety for, according to Billboard, becoming “the first major producer to use an AI-generated sample.”
Under his alias King Willonius, Hatcher had released “BBL Drizzy” in April, riffing on a Rick Ross post on X that had joked about Drake getting a Brazilian butt lift. The song did well on X as Ross’ diss trended, but the viral hype inevitably died down and Hatcher moved along to the next meme.
A few weeks later, Metro Boomin flipped the track into a beat and encouraged his followers on social media to rap over it, promising $10,000 and a free beat for the best verse. This sent “BBL Drizzy” into the stratosphere, landing Hatcher in outlets like Vulture and The Guardian, where he went on record stating that though the music and vocals were AI-generated, the lyrics were all him. “There’s no way AI could write lyrics like ‘I’m thicker than a Snicker and I got the best BBL in history,’” he told Billboard at the time.
No, AI likely can’t write “thicker than a Snicker”; its lyric-writing capabilities lag far behind the availability and flexibility of its audio-creation tools. Just last week, Google rolled out MusicFX DJ, which allows almost anyone to create new songs live, simply by typing text prompts. The company behind music generator Suno announced last Tuesday that mega-producer Timbaland, a “top user” of the tool, would be coming on as a strategic adviser.
As part of the announcement, Timbaland put the stems from his new song “Love Again” on Suno, offering aspiring producers $100,000 in cash for the best AI-assisted remix. The contest ends early next month. Earlier this year, a group of music labels sued Suno, along with Udio, the AI music generator Hatcher used to make “BBL Drizzy,” for copyright infringement, alleging the tools were trained on their artists’ work. The companies claim using copyrighted recordings is fair use.
All of which makes “BBL Drizzy” look like a lodestar for AI music. While some of his fellow creators may disagree with him, Hatcher is a proud advocate for AI tools. The success of “BBL Drizzy” and the popularity of the AI content across his social media channels show that the AI music landscape is rapidly changing, as more creators find ways to use AI to produce not just curiosities but massive hits.
Hatcher started creating music with AI only a few months prior to “BBL Drizzy,” but he’s no novice. Based in New York, the 40-year-old Floridian has been in the content creator trenches for nearly two decades. His first viral hit came during a long-ago epoch of the internet, a 2007 Soulja Boy parody called “Crank That Homeless Man.”
“I definitely put in my 10,000 hours just making comedy music,” Hatcher says. Eventually he moved on from parody and began writing original songs, such as the 2010 sleazy electro-rap hit “I Got It at Ross.” By 2020, he was working as a club comedian and pursuing a career in television screenwriting—two career paths that hit major roadblocks due to the Covid-19 pandemic and the writers’ strike.
When Hollywood writers took to the picket lines in the spring of 2023, at least in part to defend themselves against the encroaching danger of AI, Hatcher wanted to get ahead of it. He began experimenting with some of the generative AI tools his fellow writers were worried about: ChatGPT, Sora, and Midjourney. He learned his way around the language of prompting through constant practice and experimentation, producing AI short films and movie trailers for imagined franchises. Eventually he came across Suno and Udio, which—like the text and image tools he’d been using—allowed him to generate music by inputting natural language prompts.
Song parodies and comedic music had always been his bread and butter, but he quickly found that AI tools supercharged his ability to release timely, high-quality songs, often based on top trends on X or Instagram. As the Kendrick–Drake beef moved at the speed of social media, with explosions of memes and commentary rocketing around the platforms, Hatcher was able to jump right in and participate with something a little more fully formed than just another tweet.
“Whatever’s trending, that’s what I like to create songs about,” Hatcher says. His long-honed comedy talent helped him whip up a concept, but it was his more recently developed facility with Udio and Midjourney that let him turn around “BBL Drizzy” in, he says, about five minutes after writing the lyrics.
A driving bassline, reminiscent of the Wrecking Crew, supports a soulful vocal, and the end result is something almost indistinguishable from a classic Motown single. For unclear reasons, AI tools are particularly good at generating music that sounds like 1960s and ’70s soul and R&B—which just happens to be the preferred sample base of hip-hop producers.
Hatcher has found that his audience, too, resonates most with this sound. YouTube comments on a timely track like “They’re Eating the Dogs,” which flips Donald Trump’s presidential debate quote about Haitian immigrants into a smooth, Marvin Gaye-esque jam, confirm this: “This tune feels as relevant now as it did back in the good ole days,” writes one user.
People really dig King Willonius’ tunes—unironically and wholeheartedly. They’re hardly AI slop; they’re clever (all Hatcher) and really catchy (part Hatcher, part AI). Though totally unserious, they’re a serious indication of the direction that human-AI creative collaboration is heading.
This month in Williamsburg, Brooklyn, the Mondo.NYC music business conference featured at least five panels discussing the state of generative AI in the music industry. “We’re far past the early adopter phase,” music tech consultant Cherie Hu, of Water & Music, told me about her takeaways from the conference. “Everyone in the industry is either thinking about it,” she observes, or is building consumer products with it. “[And] a lot of people actually are either creating or listening to songs made with AI. So the conversation felt more serious because it’s not just hypothetical anymore.”
Despite the seriousness of AI’s encroachment into listener and consumer habits, the music industry is taking its time to enter the race for AI adoption. While film studios partner with AI companies, and various media companies make deals to train AI models on their bodies of work, the music industry is still trying to have its cake and eat it too.
Back in June, the Recording Industry Association of America, on behalf of the Big Three major label groups (Warner Music Group, Universal Music Group, and Sony Music Entertainment), sued Suno and Udio for copyright infringement. Concurrently, Universal has a “strategic AI” partnership with BandLab, makers of a mobile DAW (digital audio workstation). BandLab agreed to abide by the principles of the Human Artistry Campaign, a global consortium organizing to ensure rights holders get a say in AI’s use of their work, when developing its toolset. Much like Hollywood with its existential dilemmas, the labels seem not to want to be left behind by the generative AI movement, but they also have a vested interest in ensuring their artists’ rights are protected and their livelihoods are not threatened.
Not all AI tools are the generate-from-scratch types like Google’s MusicFX, Suno, and Udio that independent creators like Hatcher use—there are also ones for extracting stems, for mixing and mastering, and for brainstorming lyrics, all of which are finding user bases among hobbyists as well as professional producers. Sam Hollander, a pop hitmaker who has worked with Panic! at the Disco and Flavor Flav, compares AI to the explosion of drum machines in the ’80s, when session drummers had to adapt and learn programming if they wanted to keep working.
Giving a typical example of how AI fits into his and his peers’ workflows, Hollander recalls how a UK grime producer he worked with was using Suno and Udio to generate funk and soul samples; once the tool landed on one he liked, he’d use another AI tool to extract the stem and work it, manually, into a track.
“There’s going to be two paths,” Hollander predicts. “An entirely organic industry that bucks against it” versus “people who adapt [AI] into what they do.” Last week, thousands of musicians and other creatives aligned themselves with the former group, signing a letter claiming that AI training was an “unjust threat to the livelihoods of the people behind those works.”
For his part, Hollander dabbles in AI tools for brainstorming as well as for sample-hunting and generating, but, like Hatcher, always uses his own original lyrics. “I don’t think AI does humor exceptionally well yet,” Hatcher says; human input is still necessary if AI-made music is going to avoid the pitfalls of being totally boring and bad.
“[AI music] either has a shock factor, or [is] music as a background thing,” Hu points out. Shock-factor comedy is part of the appeal for successful AI projects, like the viral SpongeBob rap by producer Glorb, or ObscurestVinyl, a collection of “lost” album tracks like the Ronettes-style “My Arms Are Just Fuckin’ Stuck Like This.” Original concepts and hand-crafted lyrics mean that the AI output avoids feeling generic—and make it good and interesting enough that it might be picked up, in Hatcher’s case, by a major producer as a sample on merit alone.
The other side of that coin is the realm of AI-generated ambient/chill music, which Hu identifies as a growing domain, citing YouTube channels like Home Alone and what is ? as examples. With millions of views, and their use of AI on the down-low, these channels also show that what began as experimentation in the early days of these tools—so, literally, last year—is now going mainstream in an almost hidden way, as AI output becomes indistinguishable from human-made samples and compositions.
It takes real creative skills to use AI tools creatively. Many younger creators who come to Hatcher for tips and tricks don’t have the kind of work ethic or background in non-AI-assisted creativity that he has, and might struggle to make the most of these powerful tools. But perhaps not for long.
“This is the worst that AI will ever be,” Hu says. “It’s already pretty good, but it’s only going to get better.” She cites improvements in audio fidelity and increased controllability—soon, a creator will need less practice with prompting and working with models in order to have them turn out exactly what they want to hear.
As King Willonius, Hatcher recently released an album of his AI-assisted tracks, and he plans to premiere “BBL Drizzy: The Musical” at Art Basel in Miami this year. He’s prepared to go as far as AI music will take him.
Hatcher sees the playfulness of AI tools as their best quality, and the one that makes them vital for aspiring creators. “They’re so new, but they work best for individuals that just want to play,” he says. “Let’s have fun and explore and see what’s possible.”
The major battles of the Kendrick vs. Drake beef took place at a pivotal moment for AI music technology—Suno was released in December 2023 and Udio in early April 2024, the very month that the feud heated up with the release of Drake’s “Taylor Made Freestyle,” which, to widespread chagrin, featured (without permission) the AI-generated voices of Tupac Shakur and Snoop Dogg. That same month, Hatcher released the track Metro Boomin would go on to sample for “BBL Drizzy.” Nothing was the same.