YouTube’s rollout of AI tools shows nonsensical AI-generated audience engagement and AI slop thumbnails.
YouTube is AI-generating replies for creators on its platform so they can respond to comments on their videos more quickly and easily, but it appears these AI-generated replies can be misleading, nonsensical, or weirdly intimate.
YouTube announced that it would start rolling out “editable AI-enhanced reply suggestions” in September, but thanks to a new video uploaded by Clint Basinger, the man behind the popular LazyGameReviews channel, we can now see how they actually work in the wild.

For years, YouTube has experimented with auto-generated suggested replies to comments. These work much like the suggested replies you might have seen in Gmail, letting you click one of three canned responses that might be relevant, like “Thanks!” or “I’m on it,” instead of typing out a response yourself. “Editable AI-enhanced reply suggestions” work similarly, but instead of short, simple replies, they offer longer, more involved answers that are “reflective of your unique style and tone.”

According to Basinger’s video demoing the feature, the AI-generated replies do appear to be trained on his own comments, at times replicating previous comments he made word for word, but many of the suggested replies are strangely personal, wrong, or just plain weird.
For example, last week Basinger posted a short video about a Duke Nukem-branded G Fuel energy drink that comes in powder that needs to be mixed with water. In the video, Basinger makes himself a serving of the drink but can’t find the scoop he’s supposed to use to measure out the formula.
“I wouldn’t be surprised if the scoop was buried in the powder,” one YouTube user commented on the Duke Nukem G Fuel video, which certainly sounds right to me as someone who’s been serving up baby formula for the last year.
YouTube’s AI suggested that Basinger reply to that comment by saying: “It’s not lost, they just haven’t released the scoop yet. It’s coming soon.”
Image: LGR Blerbs/YouTube
I can see how that reply could make sense in the context of the other videos LGR publishes, which usually review old games, gadgets, and other tech, but it is obviously wrong in this instance.
Another suggested reply to that same comment said: “I’ll have to check if they’re using a proprietary blend that requires a special scoop.”
“My creativity and craft stems completely from my own brain, and handing that off to some machine learning thing that mimics my style not only takes away from the enjoyment of it all for me, but it feels supremely disingenuous,” Basinger told me in an email. “The automated comments in particular come across as tone deaf, since a huge reason YouTube makes sense at all is the communication and relationship between audience and creator. I’ve had dozens of people say that they now second-guess every interaction with YouTubers in the comments since it could easily be a bot, a fake response.”
Another commenter on the Duke Nukem G Fuel video joked that Basinger should have had a tighter grip on the lid as he was shaking the formula to prevent it from flying all over the place.
Basinger bursts out laughing as he reads YouTube’s suggested AI-generated reply to that comment: “I’ve got a whole video on lid safety coming soon, so you don’t have to worry!”
At other times, the AI-suggested replies are just nonsensical. The Duke Nukem G Fuel review wasn’t posted to the main LGR channel, but a channel called LGR Blerbs, which is his naming convention for shorter, less deeply researched videos about whatever he’s interested in. A few commenters said they were happy he was posting to the Blerbs channel again, with one saying “Nice. Back to the blerbs.”
YouTube’s AI suggested Basinger reply to that comment by saying: “It’s a whole new kind of blerp,” which I suppose is funny, but also doesn’t mean anything.
The weirdest examples of AI-generated replies in the video in my opinion are those that attempt to speak to Basinger’s personal life. In response to another commenter who said they were happy Basinger was posting to the Blerbs channel again, YouTube’s AI suggested the following reply: “Yeah, I’m a little burnt out on the super-high-tech stuff so it was refreshing to work on something a little simpler 🙂.” Another AI-generated reply thanked commenters for their patience and said that Basinger was taking a break but was back to making videos now.
Burnout is a well-established problem among YouTube creators, to the point where YouTube itself offers tips on how to avoid it. The job is taxing not only because churning out a lot of videos helps creators get picked up by YouTube’s recommendation algorithm, but also because comments on those videos, and replies to those comments, help increase engagement and visibility for those videos.
YouTube rewarding that type of engagement incentivizes the busywork of creators replying to comments, which predictably resulted in an entire practice and set of tools that let creators plug their channels into a variety of AI tools that will automatically reply to comments for them. YouTube’s AI-enhanced reply suggestions feature just brings that practice of manufactured engagement in-house.
Clearly, Google’s decision to brand the feature as editable AI-enhanced reply suggestions means that it’s not expecting creators to use them as-is. Its announcement calls them “a helpful starting point that you can easily customize to craft your reply to comments.” However, judging by what they look like at the moment, many of the AI-generated replies are too wrong or misleading to be salvageable, which once again shows the limitations of generative AI’s capabilities despite its rapid deployment by the biggest tech companies in the world.
“I would not consider using this feature myself, now or in the future,” Basinger told me. “And I’d especially not use it without disclosing the fact first, which goes for any use of AI or generative content at all in my process. I’d really prefer that YouTube not allow these types of automated replies at all unless there is a flag of some kind beside the comment saying ‘This creator reply was generated by machine learning’ or something like that.”
The feature rollout is also a worrying sign that YouTube could see a rapid descent towards AI-sloppyfication of the type we’ve been documenting on Facebook.
In addition to demoing the AI-enhanced reply suggestion feature, Basinger is also one of the few YouTube creators who now has access to the new YouTube Studio “Inspiration” tab, which YouTube also announced in September. YouTube says this tab is supposed to help creators “curate suggestions that you can mold into fully-fledged projects – all while refining those generated ideas, titles, thumbnails and outlines to match your style.”
Basinger shows how he can write a prompt that immediately AI-generates an idea for a video, including an outline and a thumbnail. The issue in this case is that Basinger’s channel is all about reviewing real, older technology, and the AI will outline videos for products that don’t exist, like a Windows 95 virtual reality headset. The suggested AI-generated thumbnails also have all the issues we’ve seen in other AI image generators, like clear misspellings of simple words.
Image: LGR Blerbs/YouTube
“If you’re really having that much trouble coming up with a video idea, maybe making videos isn’t the thing for you,” Basinger said.
Google did not respond to a request for comment.