From “AI will save us” to AI apocalypse, recent debates around AI safety and regulation have raised questions about who can and should participate in the discussion. Interest in generative AI techniques and large language models (LLMs) has further fueled the debate, which spans real-world harms as well as what is known as “existential risk”: the purported existential threat to humanity. In May, a Science paper highlighted extreme AI risks and the shortcomings of new governance initiatives, while an open letter called on all AI labs to “immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” Ironically, many of the letter’s signatories are the very architects of these AI technologies and continue to develop and invest in them.
However, this discussion of AI risks fails to take into account the perspectives and experiences of those most affected by AI systems, particularly marginalized groups: people of color, women, non-binary people, LGBTQIA+ people, immigrants and refugees, people with disabilities, children, older people, and those with lower socio-economic status. When talking about AI risks, we also need to consider the broader social, environmental, and human rights harms of AI, including the concentration of power in a few corporations.
In 1990, Chellis Glendinning wrote “Notes toward a Neo-Luddite Manifesto,” in which she praised the Luddites for denouncing laissez-faire capitalism for enabling “a growing fusion of power, resources, and wealth” justified by an emphasis on “progress.” We argue that a technology-agnostic, neo-Luddite approach is paramount to countering the power accumulated by AI’s architects.
Reclaiming the narrative for neglected perspectives
As human rights activists, we have viewed AI systems as potentially harmful emerging technologies since the mid-2010s, for example with the spread of algorithm-driven risk assessment tools in criminal justice and of facial recognition. Many of us have been committed to investigating, exposing, and preventing the harms of AI, including working to outright ban tools that disproportionately impact marginalized groups, such as facial recognition and other AI-driven surveillance technologies. We have pushed back against AI hype and “techno-solutionism” (the belief that technology can solve every social, political, and economic problem).
Fast forward to today. We are rethinking the narratives around AI safety and regulation, aiming to reclaim them while imagining a future where social justice and human rights take precedence over technological goals. Together with civil society and affected communities, we are working to carve out space, currently trapped in the overblown AI debate, that is fully divorced from the dominant AI narrative.
For example, what does climate justice look like, given the enormous energy that large language models (LLMs) consume? How can technology companies that pursue the public interest survive takeovers by a few giant corporations? And what of the near-monopoly that the US, Western Europe, and China hold over computing power, AI chips, and infrastructure?
The power dynamics underpinning today’s AI industry are not inevitable, which is why scholars, activists, and practitioners such as Joana Varon, Sasha Costanza-Chock, and Timnit Gebru are urging a shift toward a federated AI commons ecosystem. In other words:
“It is characterised by consensus-based community and public stewardship of data; decentralised, local and federated development of small-scale, task-specific AI models; worker cooperatives for properly remunerated and dignified data labelling and content moderation work; and ongoing attention to minimising the environmental footprint and socio-economic and environmental damage of AI systems.”
Finding alternatives requires the political imagination to envision a more humane future. Putting human rights at the center is part of protecting our ability to define that future on our own terms.
AI-related human rights violations
Until recently, AI governance efforts have focused on the harms that civil society and experts have documented over the past few years, demanding increased transparency and accountability as well as bans or moratoria on algorithmic biometric surveillance. Examples of such real-world harms include AI tools in social welfare systems that threaten people’s right to essential public services, and facial recognition technology in public spaces that restricts and violates human rights and civic space. Today, Israel is using this technology and AI-enabled systems for target identification, accelerating the pace of killing in Gaza.
Recent legislative measures (e.g., the EU AI Act, the Council of Europe AI Treaty, the US Executive Order on AI, and the G7 Hiroshima AI Process) are either not strict enough or carve out exemptions that are too broad for the most egregious harms. They are driven by national security concerns and by a sense of inevitability about the widespread use of AI, what some scholars call cyclical “automation fever.” Rather than stepping in to block the path these technologies have paved for us, such measures merely seek ways to make our coexistence with them more palatable.
This steady, open violation of human rights has accumulated over time: our everyday acceptance of certain deployments and uses of AI has created a permissive operating environment and normalized the core logic underlying AI’s design. There are several reasons to resist this trend. First, surveillance is an essential feature of many AI models, meaning that some degree of privacy violation will always occur. Second, the outputs of AI systems, including generative AI, are distorted and colonial, patriarchal, misogynistic, and riddled with false information, because AI is built on hierarchies of the knowledge it is trained on. Finally, the outputs of AI systems are often treated as predictions, even though they are at best questionable “estimates,” frequently based on unreliable and inadequate logic and datasets (as the computer science saying goes: “garbage in, garbage out”).
Towards a human rights-based approach
A human rights-based approach to AI governance would provide the most widely accepted, applicable, and comprehensive set of tools to address these global harms. Algorithm-driven systems do not necessarily warrant new rights or entirely new approaches to governance, especially when the only change is that data-driven technologies are being deployed with unprecedented speed and scale, amplifying existing human rights risks.
There is no need to reinvent the wheel when regulating the design, development, and use of AI. Policymakers should apply existing international human rights standards in the context of AI and respect democratic processes. In March 2024, the UN General Assembly passed the non-binding resolution A/78/L.49, which calls on countries to “prevent harm to individuals caused by artificial intelligence systems and to refrain from or cease use of artificial intelligence applications that are unable to operate in accordance with international human rights law.”
Existing human rights law, notably the International Covenant on Civil and Political Rights and the International Covenant on Economic, Social and Cultural Rights, is particularly important in assessing the positive and negative impacts of AI systems on human rights, including privacy, dignity, non-discrimination, freedom of expression and information, freedom of assembly and association (including the right to protest), and economic and social rights. Other major international human rights instruments address the specific needs and rights of marginalized groups, including women, people of color, immigrants and refugees, children, and people with disabilities, and should be used as a blueprint for centering these groups in AI governance.
Moreover, we view procedural rights as foundational to effective AI governance; they are non-negotiable first principles. For example, any restriction of human rights in the development and use of AI must have a legal basis, pursue a legitimate objective, and be necessary and proportionate. In line with the United Nations Guiding Principles on Business and Human Rights, AI developers and deployers must also conduct human rights due diligence (including human rights impact assessments), a responsibility that applies throughout the entire lifecycle of an AI system, from the design stage through post-deployment monitoring and potential discontinuation.
Mandatory transparency is no longer a matter of debate; it is fundamental to enable effective access to redress and accountability. What is still missing is an outline of the contours of such transparency, which should be crafted with input from civil society and affected communities to ensure it is meaningful in practice. An appropriate AI governance framework should further include provisions for accountability and redress, enforced through oversight bodies, judicial mechanisms and dispute resolution processes. Finally, engagement of external stakeholders, especially civil society and marginalized groups, should be mandatory throughout the AI lifecycle. Meaningful engagement requires capacity building, access to information and adequate resources.
***
“AI safety” can only be a truly worthy goal if it prioritizes the safety of all groups. Currently, this concept distracts from the fact that AI will not affect everyone equally, and will have the greatest impact on already marginalized groups.
We must collectively understand and examine how the narrative of AI safety, under the guise of efficiency, convenience, and security, promotes, conceals, and reinforces violence. Drawing on Glendinning’s manifesto, we must also draw the line at areas where the development and deployment of AI is entirely unjust and unwelcome.
Our work must center imagination: how things could be different. What if the enthusiasm and resources spent on AI were redirected toward health and social welfare programs? How much better would communities’ lives be if the funds spent on automated policing were invested in justice and reparations? What if the water consumed by data centers were returned to Indigenous communities? What if we had the right to opt out of the relentless advance of datafication and could make meaningful choices about how, and whether, we want to engage digitally across the many aspects of our daily lives?
Civil society activism often forces us to respond to immediate problems, leaving little space to imagine and build alternative futures that are not dominated by technology. We urgently need more space to dream and create these visions.