As more people, including minors, become victims of deepfake pornography and the industry that’s growing out of it, state lawmakers are pursuing legislation to deter the unauthorized creation and dissemination of digitally altered images.
Four organizations won an FTC contest for tools that help distinguish real audio clips from deepfakes. The winners' approaches illustrate the challenges that AI-generated audio poses.
As AI-generated deepfakes are being used to spread false information in elections in the U.S. and around the world, policymakers, tech platforms and governments are trying to catch up.
The Google-owned video platform says it will suspend accounts that fail to disclose when they use AI tools to make realistic-looking content. Other platforms are adopting similar policies.
The unleashing of powerful, generative AI on the public is raising concerns that as the technology becomes more prevalent, it will become easier to claim that anything is fake.
Powerful artificial intelligence tools that can create video, audio, text and pictures are raising fears the technology will supercharge disinformation and propaganda by bad actors.