"Every day is April Fool's Day now, requiring a low but constant effort," argues Motherboard's senior editor, in a post shared by Slashdot reader samleecole. "As AI-generated shitposting becomes easier, it's inevitable that one of these will catch you with your guard down, or appeal to some basic emotion you are too eager to believe..." Even if you're trained in recognizing fake imagery and can immediately spot the difference between copy written by a language model and a human (content that's increasingly sneaking into online articles), doing endless fact-checking and performing countless micro-decisions about reality and fraud is mentally draining. Every year, our brains are tasked with processing five percent more information per day than the last. Add to this cognitive load a constant, background-level effort to decide whether that data is a lie. The disinformation apocalypse is already here, but not in the form of the Russian "dezinformatsiya" we feared. Wading through what's real and fake online has never been harder, not because each individual deepfake is impossible to distinguish from reality, but because the volume of low effort gags is outpacing our ability to process them.... Hany Farid, a professor at the University of California, Berkeley who's been studying manipulated media since long before deepfakes, told me that while he's used to getting a few calls every week from reporters asking him to take a look at images or videos that seem manipulated, over the past few weeks, he's gotten dozens of requests a day. "I don't even know how to put words to it. It really feels like it's unraveling," Farid told me in a phone call. When AI generated fakes started cropping up online years ago, he recalled, he warned that this would change the future, and some of his colleagues told him that he was overreacting. "The one thing that has surprised me is that it has gone much, much faster than I expected," he said. "I always thought, I agree that it is not the biggest problem today. But what's that Wayne Gretzky line? Don't skate to where the puck is, skate to where the puck is going. You've got to follow the puck. In this case, I don't think this was hard to predict." Buzzfeed noted that a viral image of the Pope in a white "puffer" coat" was created by a 31-year-old construction worker who created it while tripping on mushrooms, then posted it to Facebook. But Motherboard's article concludes with a quote from Peter Eckersley, the chief computer scientist for the Electronic Frontier Foundation who died in 2022. "There's a large and growing fraction of machine learning and AI researchers who are worried about the societal implications of their work on many fronts, but are also excited for the enormous potential for good that this technology processes." Eckersley said in a 2018 phone call. "So I think a lot of people are starting to ask, 'How do we do this the right way?' "It turns out that that's a very hard question to answer. Or maybe a hard question to answer correctly... How do we put our thumbs on the scale to ensure machine learning is producing a saner and more stable world, rather than one where more systems can be broken into more quickly?"