A coalition of activist groups on Tuesday sent a letter to the CEOs of Meta, Reddit, Google and X, as well as eight other tech executives, urging them to adopt more aggressive policies to stem the flow of dangerous political propaganda.
Those additional measures will be crucial in 2024, when national elections will be held in more than 60 countries, the group argued in the letter, a copy of which was obtained exclusively by The Technology 202.
“There are so many elections happening around the world this year, and social media platforms are one of the most important ways people are staying informed,” said Nora Benavidez, a senior adviser at the digital rights organization Free Press. Companies need to “enhance platform integrity measures at this time,” she said.
Groups including the civil rights organization Color of Change and the LGBTQ+ advocacy group GLAAD also called on the tech giants to strengthen their policies on political advertising, including by banning deepfakes and labeling AI-generated content.
Advocates have been warning for months that the rise of AI-generated audio clips and videos is already disrupting elections around the world. Politicians could, for example, dismiss potentially damning evidence, such as recordings of secret hotel meetings or of them disparaging their opponents, as AI-generated fakes. And experts say AI risks could cause real-world harm in politically volatile democracies.
Tech companies such as Meta, Google and Midjourney say they are working on systems, such as watermarking, to identify AI-generated content. Just last week, Meta announced it was expanding its AI labeling policy to apply to a wider range of videos, audio and images.
But experts say tech companies are unlikely to catch all of the misleading AI-generated content circulating on their networks, or to modify the underlying algorithms that amplify some of those posts in the first place.
When using social media in a typically passive way, “people are…not as vigilant,” Benavidez said. “That's one of the problems.”
“Social media has diminished our curiosity and increased the siloed echo chamber effect,” she added.
The group also called on technology companies to be more transparent about the data that powers their AI models, and faulted them for weakening policies and systems aimed at combating political misinformation over the past few years.
For example, X rolled back some of its rules against misinformation and allowed far-right extremists to return to the platform. Meta offers users the option to opt out of the company's fact-checking program, which allows debunked posts to receive more attention in news feeds. YouTube has reversed its policy banning videos that falsely promote the idea that the 2020 election was stolen from former President Donald Trump, and Meta has begun allowing such claims in political ads.
Meanwhile, mass layoffs at X (formerly Twitter) and other major technology companies have decimated teams dedicated to promoting accurate information online. And an aggressive conservative legal campaign has led the federal government to stop warning tech companies about foreign disinformation campaigns on their networks.
Activists argued that dangerous propaganda on social media could lead to extremism and political violence if tech companies did not act proactively.
“It is more than possible that even more convincing misinformation will emerge in the form of deepfakes,” said Meta whistleblower Frances Haugen, whose group Beyond the Screen signed the letter. “Even if we don't want to believe that large-scale violence is possible within the United States, countries with far weaker democracies are equally vulnerable to all of this manipulation.”
Biden Administration Announces Major Expansion of Arizona Chip Facility (By Matt Viser)
Sen. McConnell urges action on bill to restrict TikTok (Axios)
Lawmakers unveil sprawling plan to expand online privacy protections (by Cristiano Lima-Strong)
House Democrats continue to use TikTok despite voting against it (Politico)
Snapchat makes changes to new rankings feature after parents' concerns (Axios)
Truth Social lost $58 million last year. Here's who made the money anyway. (By Drew Harwell)
In the 2018 crash, Tesla's Autopilot only followed lane markings (Faiz Siddiqui and Trisha Thadani)
OpenAI prepares to fight for its life as legal challenges mount (Cat Zakrzewski, Nitasha Tiku, Elizabeth Dwoskin)
Maryland passes two major privacy bills despite tech industry pushback (New York Times)
Russian trolls target U.S. aid to Ukraine, Kremlin document shows (By Catherine Belton and Joseph Menn)
Meet the 25-year-old who joined RFK's campaign team on TikTok (By Taylor Lorenz and Meryl Kornfield)
The AI deepfake apocalypse is here. These are ideas to combat it. (By Gerrit De Vynck)
- FCC Chairwoman Jessica Rosenworcel will discuss net neutrality at an event in Santa Clara, Calif., on Tuesday at 2:15 p.m.
- Georgetown University School of Law will host the event “Global Perspectives on AI Governance” on Wednesday at 3 p.m.
- The Knight Georgetown Institute will host the event “Burning question: Online Deception and Generative AI” on Thursday at 11 a.m.
- The House Energy and Commerce Committee will hold a hearing, “Where Are We Now: Section 230 of the Communications Decency Act of 1996,” on Thursday at 1 p.m.
That's all for today — thank you for joining us! Make sure to tell others to subscribe to The Technology 202 here. Get in touch with Cristiano (via email or social media) and Will (via email or social media) with tips, feedback or greetings!