
Explicit AI photos of Taylor Swift are just the start of a growing threat

An AI-generated George Carlin audio special has drawn a lawsuit from his estate. Deepfaked pornographic images of Taylor Swift circulating on X, formerly Twitter, were viewed millions of times before the platform took them down. YouTube terminated 90 accounts and suspended multiple advertisers for faking celebrity endorsements. These fakes have drawn fierce criticism, but Carlin and Swift are hardly the first celebrities to be recreated with AI, and they won’t be the last. The problem will only get worse as the technology improves by the day while the law lags behind.

“Computers can reproduce the image of a dead or living person,” says Daniel Gervais, a law professor at Vanderbilt University who specializes in intellectual property law. “In the case of a living (person), the question is whether this person will have rights when his or her image is used.” (Currently only nine U.S. states have laws against nonconsensual deepfake photography.)

A bipartisan group of U.S. senators has introduced legislation called the No Artificial Intelligence Fake Replicas And Unauthorized Duplications Act of 2024 (No AI FRAUD). Supporters say the measure will combat AI deepfakes, voice clones and other harmful digital human impersonations.

The technology is advancing at an exponential pace, and society will likely have to reckon with many more hyper-realistic but fake images, videos and audio clips. It’s already hard to tell what’s real and what’s not, making it difficult for platforms like YouTube and X to police AI-generated content as it multiplies.

“I’m very confident in saying that in the long run, it will be impossible to tell the difference between a generated image and a real one,” says James O’Brien, a computer science professor at the University of California, Berkeley. “The generated images are just going to keep getting better.”