AI slop has infiltrated almost every corner of the internet.
Generative AI makes it easy to churn out text, images, videos, and other kinds of material. Because it takes only seconds to type a prompt into your model of choice and get results, these models have become a quick and easy way to produce content at scale. And 2024 was the year we started calling this (generally low-quality) media AI slop.
This low-effort way of creating AI slop means it can now be found in almost every corner of the internet, from newsletters in your inbox and books for sale on Amazon to ads and articles on the web and crude images in your social media feeds. The more emotionally evocative these images are (wounded veterans, crying children, signs of support in the Israeli-Palestinian conflict), the more likely they are to be shared, generating higher engagement and ad revenue for savvy creators.
AI slop is not just frustrating; it raises serious questions about the future of the very models that helped produce it. Because these models are trained on data scraped from the internet, the growing number of junk websites filled with AI slop poses a very real risk that their output and performance will steadily deteriorate.
AI art is distorting our expectations of real-world events.
2024 was also the year the effects of surreal AI imagery began to seep into our real lives. Willy’s Chocolate Experience, an unofficial immersive event inspired by Roald Dahl’s Charlie and the Chocolate Factory, made headlines around the world in February after fantastical AI-generated marketing material gave visitors the impression it would be far grander than the sparsely decorated warehouse its creators actually produced.
Likewise, hundreds of people lined the streets of Dublin for a Halloween parade that did not exist. A Pakistan-based website had used AI to generate a list of events taking place in the city, which was shared widely on social media ahead of October 31. The SEO-bait site (myspirithalloween.com) has since been taken down, but both incidents show how the public’s misplaced trust in AI-generated material online can come back to haunt us.
Grok lets you create images of almost any scenario.
Most major AI image generators have guardrails, rules that dictate what AI models can and cannot do, to prevent users from creating violent, graphic, illegal, or otherwise harmful content. Sometimes these guardrails simply stop people from blatantly using someone else’s intellectual property. But Grok, an assistant made by Elon Musk’s AI company xAI, ignores almost all of these principles, in keeping with Musk’s rejection of what he calls “woke AI.”