Editorial: We need transparency from social media platforms
Published 11:04 am Saturday, January 25, 2025
Meta CEO Mark Zuckerberg has taken a lot of heat since he announced last week that he is pulling his company out of the fact-checking business and curtailing content moderation on its platforms. The criticism is understandable, given the uncertainty over how Meta’s new rules will handle misinformation and otherwise harmful material.
Keep in mind, however, that the company’s content-moderation strategies — and indeed those of practically all social media platforms — have not worked as intended. As Zuckerberg noted in a video about the changes, Meta’s automated content screening often got things wrong. And even when it correctly identified “misinformation” — a nebulous term that’s far more difficult to define than many people want to admit — the company struggled to remove it, given the sheer volume of content and the persistence of bad actors.
In any case, the problems that social media poses for its users run much deeper than content moderation. Bigger concerns stem from how platforms disseminate content. Tech companies should be helping address these worries by doing far more to reveal their algorithms to the public, allowing for greater scrutiny of their operations. The companies should also grant access to their data so that researchers and policymakers alike can study the effects that social media networks have on users and society.
The need for such transparency has become more evident since Zuckerberg’s announcement. Rather than use third-party fact-checkers to determine the accuracy of content on its platforms, the CEO said, Meta will adopt a system similar to X’s Community Notes function, which crowdsources the work of debunking false claims. Meanwhile, the company will loosen its content filters to prioritize screening “illegal and high-severity violations,” such as terrorism, child sexual exploitation and fraud.
It remains to be seen how well Community Notes will combat misinformation. The idea has promise, but X, formerly known as Twitter, has seen mixed results with its model. Some critics say it has trouble keeping up with the torrent of false claims. It’s also unclear how Meta’s algorithms will treat content that previously would have been removed. Will they allow material that is, for example, abusive to specific groups of people to spread unchecked? Will content flagged as inaccurate be deprioritized?
The answers to these questions will have real-world consequences. Many studies have found that frequent use of social media among adolescents is associated with more exposure to cyberbullying and, as a result, more suicidal thoughts. Allowing public scrutiny of Meta’s algorithms would help ensure that the company takes seriously any potential harms from its content.
Plenty of evidence shows that social media platforms create echo chambers that reinforce users’ worldviews. Internal studies conducted by Facebook itself and disclosed by whistleblower Frances Haugen in 2021, for example, suggest that, despite the company’s efforts to remove anti-vaccine content, misinformation about coronavirus vaccines on the platform proliferated.
Studies such as these illustrate how much insight can be gleaned from the data that online platforms collect. In the right hands, this data could help society identify and cope with the side effects of social media use. Imagine, for example, if public health researchers were able to examine how vaccine-hesitant people consume information and which messages resonate with them. They might be able to develop better strategies to meet vaccine skeptics where they are, and thus combat misinformation more effectively than content moderation does.
A few years ago, a bipartisan group of lawmakers in Congress proposed legislation to compel platforms to share their data with university-affiliated researchers while also devising privacy and cybersecurity standards for the process. Such an idea is worth revisiting. Lawmakers should also consider requiring social media companies to be more transparent about their algorithms, perhaps by subjecting them to oversight.
Experience has shown that social media companies cannot effectively weed out all bad content on their platforms. This is not to say their efforts have been wasted, only that, even with multimillion-dollar investments, there is a limit to what can be done.
If Meta and other social media companies want to rebuild trust with their users, openness is essential. Though it has become less fashionable to acknowledge the good that social media can do, there was once much optimism that it would actually improve society. With transparency that reaches to the foundations of these platforms, such bright potential might become imaginable again.
The Washington Post