Yesterday, Facebook revealed its plan for fighting disinformation ahead of the 2020 US election. It includes spending $2 million on a media literacy project, making it easier to research political ads, and using more prominent fact-checking labels. Each step is commendable, but it all seems hypocritical coming from a company that refuses to do anything about political ads that contain false information.
The message seems to be that Facebook is very concerned with preventing falsehoods—but only when they are spread by regular users, not by the people who might be elected to positions of real power. At the same time, CEO Mark Zuckerberg was right when he said during a speech last week that “I don’t think most people want to live in a world where you can only post things that tech companies judge to be 100% true.”
But there’s a middle ground between Facebook deciding what everyone is allowed to see and letting politicians lie as they wish. Facebook should revisit its policy of exempting politicians’ ads from fact-checking and instead put one of those new, prominent labels on top of political ads that contain false information (like the Trump campaign ad that lied about Joe Biden, or the deliberately false Facebook ad that Elizabeth Warren bought to goad Zuckerberg). That way, the company can keep the ads up without letting falsehoods spread unnoticed—which is especially important because political ads are often micro-targeted at the communities most likely to believe them.
To be clear, Facebook’s third-party fact-checking program has not been a panacea for the problem of disinformation. An enormous amount of content is posted every day—far too much for all of it to be fact-checked. And some people simply won’t trust the fact-checkers, so a label is meaningless to them.