The United Nations is calling on companies to act fast against deepfakes and AI-generated misinformation.
A new report from its telecom arm warns that the risks to elections and finance are growing rapidly.
The International Telecommunication Union (ITU) released the warning on Friday during its AI for Good Summit in Geneva.
The report pushes for advanced detection tools and strong content verification systems, especially on social media platforms.
“Trust in social media has dropped significantly because people don’t know what’s true and what’s fake,” said Bilel Jamoussi, an ITU official.
He called deepfakes one of the biggest challenges facing global tech.
Deepfakes use artificial intelligence to create fake but convincing videos, images, and audio. They can impersonate public figures or fabricate events entirely.
The ITU urged companies to adopt digital watermarks and content provenance systems that track who created media and when.
This would help verify authenticity before users see or share content.
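To illustrate the idea in the simplest terms, a provenance record can be thought of as a fingerprint of the media plus creator and timestamp metadata, signed so that later tampering is detectable. The sketch below is a simplification with hypothetical names and a shared demo key; real provenance standards such as C2PA rely on public-key signatures and embed the manifest in the file itself.

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-secret-key"  # hypothetical key; real systems use public-key signatures


def create_provenance_record(media_bytes: bytes, creator: str) -> dict:
    """Build a signed record of who created the media and when."""
    record = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "creator": creator,
        "created_at": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_provenance(media_bytes: bytes, record: dict) -> bool:
    """Check that the media is unchanged and the record was signed with the key."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    untampered = hmac.compare_digest(expected, record["signature"])
    unchanged = claimed["sha256"] == hashlib.sha256(media_bytes).hexdigest()
    return untampered and unchanged


if __name__ == "__main__":
    media = b"example video bytes"  # stand-in for a real media file
    record = create_provenance_record(media, creator="newsroom@example.org")
    print(verify_provenance(media, record))          # True: authentic, unmodified
    print(verify_provenance(media + b"x", record))   # False: content was altered
```

In practice, a platform would check such a record before a post reaches users' feeds and surface the result alongside the content.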
Leonard Rosenthol of Adobe highlighted the need for transparency:
“We need more of the places where users consume their content to show this information… When you are scrolling through your feeds you want to know: ‘can I trust this image, this video…'”
Adobe has worked on deepfake issues since 2019.
Other experts agreed that no single country can solve this alone.
“If we have patchworks of standards and solutions, then the harmful deepfake can be more effective,” said Dr. Farzaneh Badiei, founder of Digital Medusa.
The ITU is currently working on global standards for video watermarking, since videos now make up 80% of all internet traffic.
Tomaz Levak, founder of Umanitek, said private companies must do more than wait for regulation.
“AI will only get more powerful, faster or smarter… We’ll need to upskill people to make sure that they are not victims of the systems,” he said.
The ITU wants deepfakes countered before the damage spreads faster than the truth.