Amber Sinha is a Tech Policy Press fellow.
This is the fifth issue of our Dispatch on Technology and Democracy, which explores the use of artificial intelligence (AI) and other technologies in India's general elections. In previous issues of this dispatch, we have discussed the Voluntary Code of Ethics (Code) formulated by the Election Commission of India (ECI). The Code is a key regulatory instrument governing political advertising on internet platforms during elections in India. This dispatch examines it in more detail.
The Voluntary Code of Ethics
In March 2019, just before the general elections, the ECI brought together several internet companies, including Google, Facebook, and ShareChat, to develop a voluntary code of ethics to protect the integrity of the electoral process. Recognizing the growing influence of social media on electoral processes, the Code was introduced to identify measures that would increase confidence in the electoral process. The companies worked on the Code under the aegis of the Internet and Mobile Association of India (IAMAI), which presented it to the ECI in March 2019. After the 2019 elections, IAMAI and the participating companies agreed to observe the Code in all subsequent state and national elections.
The main focus of the Code was to increase transparency in paid political advertising. It brought political advertisements on social media platforms such as Facebook and Twitter within the ambit of the Model Code of Conduct (MCC). Under this self-regulatory arrangement, the ECI can invoke Section 126 of the Representation of the People Act, 1951 to notify platforms of violations of the Code. This established a direct channel between the ECI and the platforms for notification and prompt removal of violating content: platforms committed to act on reported content within three hours during the two-day non-campaigning "silence period" before polling, and to report on such action to IAMAI and the ECI. The companies also agreed to help provide information on electoral matters, run educational campaigns to raise awareness of election laws, and offer platform-specific training to the ECI's nodal officers, who act as liaisons between the Commission and the platforms.
As with newspaper and radio advertising, the Code required parties to account for their spending on social media advertising. Advertisers had to submit pre-certification issued under the ECI's Media Certification and Monitoring Committee (MCMC) framework for election advertisements featuring the names of political parties or candidates. Candidates were asked to furnish details of their social media accounts when filing their nominations, and candidates and political parties had to declare their social media spending, which counted toward overall expenditure limits.
Neither the ECI nor the Representation of the People Act defines the term “political advertising.” How political advertising was defined and managed was largely left to individual internet platforms. Google, for example, identifies four types of advertisers whose ads fall within the scope of political ads: (1) political parties, (2) businesses, (3) nonprofit organizations, and (4) individuals. The criterion for classifying an ad as political was whether it featured a political party or candidate.
Meanwhile, X, formerly known as Twitter, defined political ads as those purchased by a political party or candidate, or those advocating for a clearly identified candidate or political party. X banned political advertising in 2019, after the platform faced criticism for failing to curb the spread of misinformation during elections, but it significantly relaxed the ban in 2023.
The new ECI guidelines require political parties and candidates to disclose their spending on social media advertising, but they do not cover ads purchased by third parties, such as “supporters” or “well-wishers” who may have no formal association with a political party or candidate. Neither the MCC nor the voluntary Code adequately regulates this area. Platforms have stepped in to fill the gap through their terms of service policies: requiring disclaimers on paid political ads, removing ads that should carry disclaimers but do not, and maintaining public archives of these advertisements and the amounts spent by their purchasers.
However, these measures remain insufficient to identify all types of political content and actors, and they are only partially enforced. Under the Indian Penal Code (IPC), the only penalty for a person who incurs election expenses on behalf of a candidate without written authorization is a fine of ₹500, a trivial amount compared to what is typically spent on platform advertising.
More importantly, the self-regulatory Code leaves a substantial share of campaigning on social media entirely unregulated: proxy advertising, in which political parties or candidates do not directly fund the ads. Such advertising is often paid for not by the parties themselves but through networks of supporters.
Other developments
The Free Speech Collective has released a report examining free speech violations in the first four months of 2024, in the run-up to the general election. The report documents 134 violations of free speech, ranging from censorship to attacks, arrests, harassment, online speech violations, and sedition cases. For example, it details instances in which social media accounts and web links were blocked and internet access was shut down, notably during the farmers' protests in February.
Boom Live's Karen Rebelo examined X's crowdsourced fact-checking program, Community Notes. The feature allows contributors to write notes on posts they believe are misleading or missing important context. The program uses a bridging algorithm to determine whether a note is displayed alongside a post. Unlike engagement-based ranking systems, in which the most popular content gets the most attention, bridging-based ranking surfaces content that receives positive ratings from users who have tended to disagree in their past assessments, with the aim of building trust and understanding across different viewpoints and helping opposing sides bridge the gap. Rebelo noted that in many cases the platform did not display notes on entirely false posts because they had not received enough “helpful” ratings. She also cites experts who have criticized the fully crowdsourced and automated fact-checking model that Community Notes relies on.
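For readers curious about the intuition behind bridging-based ranking, the sketch below illustrates the core idea in Python. It is not X's actual Community Notes implementation; the agreement measure, thresholds, and sample data are simplified assumptions made purely for illustration. The point it demonstrates is the one described above: a note is surfaced only when it is rated helpful by raters who have a record of disagreeing with one another.

```python
from itertools import combinations

# Illustrative sketch of bridging-based ranking (simplified; not X's actual algorithm).
# Each rater has a history of past ratings on earlier notes (+1 helpful, -1 not helpful).
# A new note is surfaced only if raters who usually disagree both find it helpful.

def agreement(history_a, history_b):
    """Fraction of co-rated notes on which two raters gave the same rating."""
    shared = [note for note in history_a if note in history_b]
    if not shared:
        return 0.5  # no shared history: treat the pair as neutral
    same = sum(1 for note in shared if history_a[note] == history_b[note])
    return same / len(shared)

def should_display(helpful_raters, histories, max_agreement=0.5, min_helpful=2):
    """Display a note only if enough raters found it helpful AND at least one
    pair of those raters has a record of disagreeing with each other."""
    if len(helpful_raters) < min_helpful:
        return False
    return any(
        agreement(histories[a], histories[b]) <= max_agreement
        for a, b in combinations(helpful_raters, 2)
    )

# Hypothetical data: raters "a" and "b" disagreed on most past notes, yet both
# rate the new note helpful, so it "bridges" viewpoints and would be shown.
histories = {
    "a": {"n1": +1, "n2": +1, "n3": -1},
    "b": {"n1": -1, "n2": -1, "n3": -1},
    "c": {"n1": +1, "n2": +1, "n3": +1},
}
print(should_display({"a", "b"}, histories))  # True: helpful across disagreeing raters
print(should_display({"a", "c"}, histories))  # False: these raters usually agree
```

An engagement-based ranker, by contrast, would promote whichever note simply collected the most "helpful" votes, regardless of who cast them; that difference also explains Rebelo's observation that notes on clearly false posts can stay hidden when raters from opposing camps never converge on them.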
Reporters Without Borders (RSF) and the Network of Women in Indian Media (NWMI) trained 60 women journalists in election reporting across India. The training focused on combating disinformation, practicing solutions journalism, and taking a comprehensive approach to ensuring diverse and reliable coverage of the current general election. The workshops covered fact-checking techniques, including the use of artificial intelligence (AI) to identify viral fake news and deepfakes. Participants also received technical training in processing, analyzing, and presenting quantitative information about elections, with a focus on campaign finance research.
Additional reading material
- In The Walrus, Mitali Mukherjee writes extensively about the use of deepfakes and AI in India's elections.
- Srishti Jaiswal analyzes the BJP's use of WhatsApp groups in regional election campaigns.
Please give us your feedback. Is there a story or important development we missed? If you have any suggestions, comments, or critiques, please contact us at contributions@techpolicy.press.