New Delhi: Big tech companies such as OpenAI, Google, and Meta have agreed to add checks and balances to tackle deepfakes and election-related misinformation on their online platforms, following meetings with officials from the Ministry of Electronics and Information Technology (Meity), including ministers Ashwini Vaishnaw and Rajeev Chandrasekhar, at least three officials close to the development told Mint. At the meetings, the tech companies were given “informal instructions” to tackle misinformation by suppressing AI-generated content and outputs related to “sensitive keywords.”
Union IT minister Vaishnaw acknowledged the meeting in response to Mint’s queries.
Since participating in these meetings, both Google and Meta have published memos outlining their initiatives on AI-modified content and advertising across their intermediary, search, and conversational AI platforms, including ChatGPT, Facebook, Gemini, Google Search, Instagram, WhatsApp, and YouTube. Each of these companies has been encouraged to take a “precautionary” approach to AI-generated information, including clearly labelling such content in political ads and limiting the ability of AI to generate search results about key political figures, political parties, or opinions related to the upcoming 2024 general election.
Adobe, the US-based company behind Photoshop, one of the world’s largest creative visualization tools, is also working to prevent its generative tool Firefly from being used to manipulate or create images for political campaigns, Andy Parsons, senior director of content authenticity initiatives at Adobe, said in an interview with Mint.
The Centre also discussed how the intermediaries mentioned above (other than Adobe), which enjoy safe harbour protection under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, could be held liable to prosecution if they fail to curb the spread of AI-driven misinformation on their platforms. The discussion took place against the backdrop of the steady proliferation of AI in everyday content on the internet, with top technology companies racing to be first to introduce technologies such as metadata watermarking and content tagging, the people cited above said.
The Centre’s push for suppression of specific keywords comes amid “a growing understanding of the impact that AI can have on public discourse,” Adobe’s Parsons said. “We are only now beginning to realize how the Munich accord could affect Big Tech and elections. It could help inform government decision-making on how this can be addressed.”
On February 16, 20 companies, including Adobe, Google, Meta, Microsoft, OpenAI, and X (formerly Twitter), signed the Tech Accord to Combat Deceptive Use of AI in 2024 Elections. The accord, signed at the Munich Security Conference, lays out eight key commitments among the technology companies, including deploying technology to mitigate the risks associated with deceptive AI election content, assessing AI models to understand those risks, detecting the circulation of such content, and fostering cross-industry resilience.
Election-specific strategy disclosures published by Google and Meta on March 12 and March 19, respectively, provide further details about the accord. In a post attributed to the “Google India Team,” the company said it would require disclosure when AI is used in political advertising, label AI-generated content on YouTube, and use digital watermarks to identify content that has been modified. The same post said that Gemini, the company’s generative AI platform, has “begun implementing restrictions on the types of election-related queries that Gemini responds to.”
Meta said in a post that it works with 12 fact-checking teams that independently verify AI-generated content, and that altered political content will be restricted across its platforms. “If content is rated as ‘altered’, or we detect it to be nearly identical, it appears lower in Feed on Facebook, significantly reducing its distribution. On Instagram, altered content is shown less prominently in Feed and Stories, significantly reducing the number of people who see it,” the post said.
Neither company responded to emails from Mint seeking details of their meetings with Meity officials and ministers.
Senior legal and policy experts said that, depending on the issue at hand, existing provisions under both the IT Rules, 2021 and the Indian Penal Code (IPC) could apply both to the Big Tech companies themselves and to users promoting such content.
“If intermediaries fail to proactively curb AI-driven misinformation on their platforms, they could face court orders under Rule 7 of the IT Rules, 2021. The fight against AI misinformation during elections will therefore shift from responsibility to accountability for these companies,” said N.S. Nappinai, senior Supreme Court counsel and founder of the Cyber Saathi Foundation.
A senior partner at a major law firm, who requested anonymity because the firm represents one or more of the Big Tech companies mentioned, added that a key challenge lay in the “all-encompassing definition of an intermediary.”
“The lack of a clear definition of platforms and intermediaries leaves our regulatory mechanisms with a broad-brush approach to liability and accountability. This could make it difficult to suppress such content effectively and urgently,” the lawyer said.
Rule 7, cited above, provides that if a company fails to exercise due diligence to curb impersonation and various forms of manipulation, it can be held liable under applicable laws, including the Indian Penal Code.
Kazim Rizvi, founding director of policy think tank The Dialogue, said that effectively curbing misinformation would require “a focus on enforcing existing legal frameworks, rather than creating new regulations.”
“The current legislative environment already provides a comprehensive basis for dealing with deepfakes, including Rule 3(1)(b) of the IT Rules, 2021. The technology itself is not inherently harmful and has great potential in areas such as education, content creation, crime prevention, and awareness for government programmes. Over-regulation could inadvertently limit these positive applications and curtail the broader benefits of AI-based technological advances. The key, therefore, is to operate existing legal structures seamlessly, strengthen law enforcement capacity, ensure platforms comply with regulations and understand their role in identifying and reporting deepfakes, and educate the public, effectively building a more conscious and proactive digital community,” Rizvi added.