WASHINGTON — On Tuesday, leaders of a diverse global coalition of more than 200 civil society organizations, researchers, and journalists reiterated their call for six interventions that Big Tech platforms should adopt to protect the integrity of the 2024 elections.
Earlier this month, the coalition sent a letter to executives at 12 platforms, giving each company until April 22 to respond and indicate whether it would adopt the interventions. At today's press conference, coalition leaders discussed the responses from the eight large tech companies that replied. Four platforms — Twitter, Discord, Rumble, and Twitch — did not respond, continuing to evade public transparency and accountability for how they address the spread of disinformation in this election year.
In the coming days, Free Press will publish details on the companies that responded and share analysis based on years of monitoring Big Tech platforms and scrutinizing their efforts. Preliminary findings appear below.
“Social media companies must redouble their efforts on election integrity, because generative AI can intensify the hate and lies that endanger democracy and public safety,” said Jessica J. González, co-CEO of Free Press, which organized the coalition behind the demands on the platforms. “Instead, many companies are once again platforming white supremacists and election conspiracy theorists, refusing to label Big Lie content as false even as it undermines trust in our democratic institutions, and laying off large numbers of the staff responsible for moderating content across languages and curbing abusive uses of AI. The complete failure of platforms like Twitter, Rumble, Twitch, and Discord to respond to our requests shows a total disregard for the precarious state of democracies around the world.”
“2024 is a turning point for democracy,” said Maria Ressa, Nobel Peace Prize laureate, CEO of Rappler, and member of the Real Facebook Oversight Board. “There is a concerted effort to silence reasoning based on facts and evidence… There is so much at risk, because it's not just the companies themselves. Geopolitical warfare is exploiting weaknesses that have not been previously addressed. We call on the enlightened self-interest of these technology companies. The way to protect your business is to ensure a healthy public information ecosystem. It's about making sure that voters actually have agency and that we have the facts to make the right choices.”
“Working at Twitch showed me what is truly possible across the industry: companies have the ability to implement and enforce election-integrity efforts,” said Annika Collier Navaroli, a former policy official at Twitter and Twitch. “The experience of the past two elections has shown how important it is for the civil rights community, researchers, and journalists to collaborate with technology companies. Especially in the midst of the historic global elections of 2024, companies need to engage meaningfully with interventions such as those proposed in the coalition's letter.”
“Since 2014, we have seen an epidemic of disinformation that disproportionately targets the Latino community,” said María Teresa Kumar, president and CEO of Voto Latino. “We are reminded that nefarious activity often begins by targeting Latino communities, incubates there, and then proliferates everywhere else… In Spanish and Spanglish, many fact-checkers are not removing disinformation fast enough, and it is reaching key voters… In key battleground states, the disinformation is real and the fear-mongering is real. We're seeing it… That's why we continue to call on these platforms to strengthen their practices for moderating and enforcing policies on non-English content.”
“No moderation failure is more consequential to the rise of authoritarian movements, extremism, hate speech, and the real-world violence that accompanies them than the loopholes [tech companies have] carved out for politically powerful people under rationales like ‘newsworthiness’ and the excuse that their comments are in the public interest,” said Heidi Beirich, co-founder of the Global Project Against Hate and Extremism. “Their political advertising is also largely uncontrolled, and it is another place where this kind of abuse takes root… There is a ton of research suggesting that incendiary statements from political leaders make political violence more likely… That's why it's important that technology companies treat all users equally and enforce content-moderation rules on public figures and politicians.”
“Responses from these companies should not be taken as a substitute for real accountability,” said Nora Benavidez, director of digital justice and civil rights at Free Press. “In previous election cycles, platforms promised to protect elections, only to later switch off key features and interventions — causing real-world harm. Coalitions like ours are essential to act as watchdogs holding these companies accountable to their users.”
Free Press highlights from the platform responses:
- Eight of the 12 platforms responded in writing to the coalition's requests.
- Four platforms (Twitter, Discord, Rumble, and Twitch) offered no substantive response to the coalition.
- None of the platforms committed to adopting all six of the coalition's demands.
- None of the platforms provided a timeline for fulfilling their commitments.
- None of the platforms committed to moderating Big Lie content.
- Only TikTok committed to hiring more human reviewers. Meta, Twitter, YouTube, and other platforms have laid off tens of thousands of key staff over the past 18 months.
- A majority of the eight responding platforms expressed a commitment to mitigating hate and lies in languages other than English.
- TikTok announced that it will invest $2 billion in trust and safety in 2024 and will moderate content in 50 languages.
- Google and YouTube announced that they will moderate content in English, Spanish, and EU and Indian languages. They did not commit to reviewing content in Asian or African languages.
- All responding platforms except TikTok claim to hold VIP accounts to the same standards as other users. Speakers on the press call expressed doubt that this is happening in practice, and the coalition will continue monitoring it.
- Some, but not all, platforms committed to banning deepfakes and other misinformation in political ads.
- Meta said it would label political ads that use AI, but did not commit to human review of political ads.
- Reddit and Snapchat have pledged to ban lies in election ads and to conduct human reviews of all political ads.
- More information is needed about political advertising on Google and YouTube, but their efforts appear weak.