The firm’s emerging policy on deepfakes seems to be to leave them up, but flag that they are fake.
The news: A deepfake video of Mark Zuckerberg was uploaded to Instagram, which Facebook owns, earlier this week. In the clip, Zuckerberg talks about being “one man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.” It was created using a two-year-old clip of Zuckerberg talking about Russian interference in the 2016 US election. You can watch it here.
Who made it? It was created by artists Bill Posters and Daniel Howe in partnership with a UK-based advertising company called Canny. It’s one of several made by the group as part of Spectre, an exhibition that took place at the Sheffield Documentary Festival this week.
An emerging policy: Last month, Facebook decided not to remove a video of Nancy Pelosi that had been doctored to make her seem drunk (though it was not made using deep-learning techniques, so it is not technically a deepfake). Instead, Facebook said that if third-party fact-checkers found a video to be fake, it would attach disclaimers informing users of that finding and rank the video lower in people’s news feeds. This seems to be its new rule: leave manipulated videos up, but make clear they are not real.
A growing problem: The speed and ease with which anyone can now make and spread a deepfake has alarming implications, and policymakers are increasingly waking up to the issue. Tomorrow the US will hold its first-ever congressional hearing on deepfakes, and Representative Yvette Clarke will introduce a bill that attempts to clamp down on them.