When Mark Zuckerberg announced that Meta would end much of the company’s moderation efforts, including third-party fact-checking and content restrictions, and replace them with a crowdsourced model, it became clear that the company was shifting the responsibility for identifying and combating misinformation onto its users, leaving them to navigate a vast sea of false and misleading content on their own. This wasn’t merely a product pivot; it was a retreat from accountability.
Days ahead of Donald Trump’s presidential inauguration, Zuckerberg outlined an overhaul of Meta’s content-moderation policy that will fundamentally change how misinformation is addressed on Meta’s platforms in the US, and perhaps eventually worldwide.
Meta’s fact-checking programme was launched in 2016 after Russia allegedly used Facebook and other platforms to influence American voters during the elections that year. A 2023 statement from Meta said the fact-checking program had “expanded to include nearly 100 organizations working in more than 60 languages globally.”
Fact-checkers did not censor or remove posts; they added critical context and debunked false claims so that users could make informed decisions. Once a fact-checker rated a piece of content as “False”, “Altered” or “Partly False”, it received reduced distribution on Facebook and Instagram. As recently as 2022, Meta was touting fact-checkers’ contribution to content moderation, noting that it had invested more than $100 million in global fact-checking. But last week, fact-checkers were blindsided by Meta’s sudden decision: many fact-checking organisations had signed new contracts to work with Meta in 2025 just two weeks earlier, only to wake up on January 6 to the news that the programme was being scrapped.
US-first Policy
Meta’s partnerships with fact-checkers weren’t perfect, but they were essential. They gave billions of users a way to navigate the digital world with a little more confidence, and for years they served as a safety net against harmful content: viral conspiracy theories, doctored videos, violent extremist material, and false and unscientific claims. Beyond the immediate job losses in the fact-checking community, Meta’s withdrawal weakens the global infrastructure for combating misinformation.
Meta is also discontinuing key Diversity, Equity, and Inclusion (DEI) programs, effective immediately. The company will dismantle its DEI team and cease efforts to source business suppliers from diverse-owned companies. Meta cited the evolving “legal and policy landscape surrounding diversity, equity, and inclusion efforts in the United States” as the reason for this shift.
This decision offers significant insight into the future of social media and into a shifting political landscape in which the Trump administration is likely to push tech platforms to align more closely with US political sensitivities, prioritising “American interests” over global diversity goals. While this repositioning may offer political or regulatory advantages within the US, it will also expose Meta and other tech companies to heightened international scrutiny and regulatory challenges.
The Big Gamble
Soon after Meta’s fact-checking announcement, India’s Parliamentary Committee decided to summon Meta representatives over Zuckerberg’s recent comments about Indian elections, in which he claimed India was among “a ton of countries…[wherein] incumbents basically lost [elections due to their handling of Covid-19 pandemic].” Zuckerberg made the claim on The Joe Rogan Experience podcast, which has itself been accused in the past of spreading misinformation. The claim is false: Prime Minister Narendra Modi’s government was returned to power in India’s 2024 general election, and India was quick to react.
“Misinformation on a democratic country maligns its image. The organisation would have to apologise to the Parliament and the people here for this mistake,” Nishikant Dubey, who heads the Parliamentary standing committee on Communication and Information Technology, said in a post on X.
In the European Union, debate is already raging over how to protect people from social media harms. EU regulations require social media companies to take a more proactive approach to combating online harms, including disinformation. The EU’s Digital Services Act obliges platforms to swiftly remove illegal content and mitigate disinformation; under the law, major social media platforms face fines of up to 6% of their annual global revenue for failing to remove illegal content, disclose moderation policies, or address the impact of disinformation.
In Germany, Meta faces some of the toughest regulations. In 2015, Facebook, like other platforms, agreed to remove hate speech within 24 hours of being flagged, as part of an agreement specific to the country. “During the US election campaign, we have already seen disturbing interference from Elon Musk on X, pushing algorithms with his own political beliefs,” Axel Voss, the German Member of the European Parliament who authored the EU Parliament’s copyright bill, told me. “Together with colleagues across party lines we have sent several requests to the European Commission to assess such behavior against the rules of the EU Digital Services Act.”
What does it mean for Asia?
The Asia-Pacific region represents a big market for Meta, with over 1.4 billion monthly active Facebook users—accounting for 40 percent of the platform’s global user base. In 2023, the region generated $36 billion in revenue, contributing 26.8 percent of Meta’s total global earnings. To address the challenge of misinformation on its platforms, Meta has partnered with numerous independent fact-checking organisations across Asia. However, recent policy shifts have introduced uncertainty about the future of these collaborations.
Already, Asian countries such as Indonesia, Malaysia, Singapore, Australia and South Korea want social media platforms to take more responsibility for combating harmful content, disinformation, and other online harms. The growing pressure stems from the platforms’ significant influence on public discourse, national security, and social stability.
Facebook has faced such pressure before: in 2018, United Nations investigators accused the platform of playing a “determining role” in the spread of hate speech against Rohingya Muslims in Myanmar, victims of what the UN described as genocide.
The Indian government, in particular, views deepfakes as a growing threat and has urged tech companies to actively police them by proactively identifying and flagging misinformation and content that impersonates individuals. India’s IT Rules already require platforms to proactively address misinformation: under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, a social media platform with more than 5 million registered users is classified as a Significant Social Media Intermediary (SSMI). In addition to the due diligence requirements prescribed for all intermediaries, SSMIs must appoint a nodal contact officer who is available 24/7, enable identification of the first originator of information on the platform, use technology-based measures to identify certain types of content, establish a physical contact address in India, take down content within 36 hours of receiving a court or government order, and provide a grievance redressal mechanism for users and victims.
Political Pressures at Play
Let’s not ignore the elephant in the room: politics. Meta’s decision isn’t just about simplifying processes or saving costs; it’s about surviving in an environment where political power shapes corporate actions. In response to Meta’s decision to end the fact-checking programme, US President-elect Trump remarked that Meta has “come a long way,” signaling his approval; this came just months after Trump had called for Zuckerberg to face life in prison for alleged interference in US elections. Meta’s move to align itself with the incoming administration seems almost inevitable, and it is a financially and politically advantageous one, especially as the company prepares to battle the US Federal Trade Commission in an April antitrust case that could force the sale of Instagram.
While Zuckerberg is trying to secure his company’s future, it’s the platform’s users who will bear the brunt of the fallout as the digital landscape grows increasingly divisive worldwide.