Flo Crivello was monitoring outputs from the AI assistants his company Lindy makes when he noticed something strange. A new client had asked her Lindy AI assistant for a video tutorial that would help her better understand how to use the platform, and the Lindy responded in kind. That's when Crivello knew something was wrong: there is no video tutorial.
“We saw this and we were like, ‘Ok, what kind of video did it send?’ and then we were like, ‘Oh snap, this is a problem,’” Crivello told TechCrunch.
The video the AI sent the client was the music video to Rick Astley’s 1987 dance-pop hit “Never Gonna Give You Up.” In more familiar terms: the client got rickrolled. By an AI.
“A customer reached out asking for video tutorials. We obviously have a Lindy handling this, and I was delighted to see that she sent a video. But then I remembered we don’t have a video tutorial and realized Lindy is literally fucking rickrolling our customers,” Crivello wrote on X.
Rickrolling is a bait-and-switch meme that’s over fifteen years old. In one incident that popularized the meme, Rockstar Games released the much hyped “Grand Theft Auto IV” trailer on its website, but traffic was so immense that the site crashed. Some people had managed to download and post the video onto other sites like YouTube, sharing the links so that people could see the trailer. But one 4chan user decided to play a prank and share the link to Rick Astley’s “Never Gonna Give You Up.” Seventeen years later, people are still pranking their friends by sharing the Astley song at inopportune moments – now, the music video has over 1.5 billion views on YouTube.
This internet prank is so ubiquitous that inevitably, large language models like ChatGPT, which powers Lindy, picked up on it.
“The way these models work is they try to predict the most likely next sequence of text,” Crivello said. “So it starts like, ‘Oh, I’m going to send you a video!’ So what’s most likely after that? YouTube.com. And then what’s most likely after that?”
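The mechanism Crivello describes can be sketched with a toy bigram model — hand-made word counts standing in for a real LLM (which uses a neural network over subword tokens, not counts). If every "video" in the training data is followed by a YouTube link, greedy prediction will march straight to one:

```python
# Toy sketch of next-token prediction: a bigram model built from
# hand-made counts. Illustrative only -- real LLMs are neural networks
# over subword tokens, but the "most likely next token" logic is the same.
from collections import Counter, defaultdict

corpus = (
    "i will send you a video : youtube.com/watch "
    "here is a video : youtube.com/watch "
    "sending the video : youtube.com/watch"
).split()

# Count which token follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(token: str) -> str:
    """Greedy prediction: the most frequent continuation seen in training."""
    return follows[token].most_common(1)[0][0]

# Starting from "video", the model heads straight for a URL.
print(predict("video"))  # -> ':'
print(predict(":"))      # -> 'youtube.com/watch'
```

Since the web's training data contains countless "here's a video" sentences that end in a rickroll link, the statistically likely continuation is sometimes Astley.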
Crivello told TechCrunch that out of millions of responses, Lindy only rickrolled customers twice. Still, the error needed to be patched.
“The really remarkable thing about this new age of AI is, to patch it, all I had to do was add a line for what we call the system prompt — which is the prompt that’s included in every Lindy — and it’s like, don’t rickroll people,” he said.
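A patch like the one Crivello describes can be sketched in the chat-message format used by OpenAI-style APIs. The actual text of Lindy's system prompt is not public, so the wording below is illustrative, and `build_messages` is a hypothetical helper:

```python
# Hedged sketch: a guardrail line added to a system prompt, using the
# chat-message format of OpenAI-style APIs. The real Lindy prompt is
# not public; this wording and the helper name are illustrative.
SYSTEM_PROMPT = (
    "You are a helpful customer-support assistant.\n"
    "Never link to content you cannot verify exists.\n"
    "Do not rickroll people."
)

def build_messages(user_message: str) -> list[dict]:
    """The system prompt is included with every conversation."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ]

msgs = build_messages("Can you send me a video tutorial?")
print(msgs[0]["content"].splitlines()[-1])  # the guardrail line
```

Because the system prompt rides along with every request, one added line changes behavior everywhere without retraining the model.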
Lindy’s lapse calls into question just how much of internet culture will be subsumed into AI models, since these models are often trained on large swaths of the web. Lindy’s accidental rickroll is particularly remarkable because the AI organically reproduced this very specific user behavior, which informed its hallucination. But traces of internet humor seep into AI in other ways, which Google learned the hard way when it licensed Reddit data to train its AI. Because Reddit is a hub of user-generated content – much of which is satirical – Google’s AI ended up telling a user that you can make cheese stick to pizza dough better by adding glue.
“In the Google case, it wasn’t exactly making stuff up,” Crivello said. “It was based on content – it’s just that the content was bad.”
As LLMs rapidly improve, Crivello thinks we won’t see as many gaffes like this in the future. Plus, Crivello says it’s easier than ever to patch these mishaps. In the early days of Lindy, if one of its AI assistants couldn’t complete a task the user asked for, the AI would say it was working on it, but never deliver the product. (Oddly enough, that sounds pretty human.)
“It was really hard for us to patch that issue,” Crivello said. “But when GPT-4 came out, we just added a prompt that was like, ‘If the user asks you to do something you’re not able to do, just tell them you can’t do it.’ And that fixed it.”
For now, the good news is that the customer who got rickrolled might not even know it.
“I don’t even know that the customer saw it,” he said. “We followed up immediately like, ‘Oh hey, this is the right link to the video,’ and the customer didn’t say anything about the first link.”