Nicola Lacetera is Professor of Strategic Management at the University of Toronto Mississauga and Rotman School of Management.
The development and rapid improvement of artificial intelligence capabilities over the past 15 years have drawn even more attention to a long-standing and alarming approach to technology.
“Don't ask for permission, ask for forgiveness”: this phrase, attributed to the late Admiral Grace Hopper, has long been popular in Silicon Valley. In this view, innovators cannot afford to waste time waiting for clarity on regulatory requirements, let alone on the ethical implications of their products. New technologies evolve rapidly and are inherently beneficial, so bureaucratic delay is a cost to humanity. Breaking things lets you move faster.
In fact, this story predates the digital economy. In 1970, Milton Friedman, later a winner of the Nobel Prize in Economics, argued that the social responsibility of business is to maximize profits; correcting distortions such as social or environmental harm is left to governments and to the goodwill of individual shareholders. The penetration of these ideas into society culminated with the advent of the Internet and the rise of big technology companies.
In this narrative, the World Wide Web, by reducing the cost of accessing, storing, and sharing information, emerged as a powerful, positive force that expands opportunity for all. Holding technology companies accountable for the content posted on their platforms, or limiting the size and influence of any single company, would therefore quell this storm of creative disruption and hinder technology entrepreneurs' quest to make the world a better place.
Concerns about the abuse of market power became less and less relevant; sharing personal data in exchange for “free” services became a matter of individual responsibility; and, in the name of free speech, any restriction on what can be posted online became taboo.
That's where AI comes in. For many business leaders, commentators, and academics, AI is just another tool, albeit one with unprecedented predictive and, now, generative capabilities. It is also a “general-purpose” technology, applicable across many industries, much like electricity. And who would want a world with limited use of electricity?
However, AI is not electricity. Electricity does not learn, does not predict your preferences or behaviour, and does not generate text, programming code, songs, or images. Because their revenues depend on advertising, online platforms from Facebook to YouTube profit from greater user engagement, and AI has been used to exploit people's tendency to engage more with news that confirms their ideological beliefs and leaves them feeling excited or angry.
Platforms achieved this by accurately predicting users' preferences (and weaknesses) and by creating information bubbles. Demagogues and dictators have used the same strategies to spread misinformation, polarize opinion, and ultimately poison public debate, influencing the outcomes of some of the most consequential events of recent years, from elections to attacks on democratic institutions.
The emerging and rapidly improving generative capabilities of AI enable the dissemination of information and images that are fabricated yet resemble reality in form and content. Imagine what demagogues and dictators can do with these tools, combined with platforms that allow the mass dissemination of material whose truth most people cannot verify.
Or imagine these tools in the hands of a child-pornography ring. Already, photos of high-school girls have been turned into realistic fake nude versions and circulated on the web (98 percent of all deepfake videos contain pornographic content; 99 percent of the people depicted are women, and many are minors).
Eventually, in a fairly optimistic scenario, the misinformation will be uncovered and the fabricated material removed. But by the time “eventually” arrives, the fate of a democracy may already have been decided, and the people targeted will have suffered lasting trauma.
In the face of these challenges, and of effects that may be long-lasting and hard to reverse, the European Union's recently approved AI Act seeks to overturn the ask-forgiveness-not-permission adage. The details matter, of course, but the overall message is clear: treating AI as a neutral, general-purpose technology to be promoted without predefined constraints is outdated and inappropriate.
The law prohibits AI systems that manipulate behaviour so as to prevent informed decision-making, that classify and score people based on sensitive characteristics, or that harm vulnerable groups (for example, on the basis of age or disability), where the harm they may cause exceeds their claimed benefits. It also requires clear disclosure of whether a text or an image was generated by AI.
In these and other domains, we cannot afford to wait for after-the-fact apologies of the kind offered before the U.S. Congress in response to evidence of suicides linked to social-media interactions. Protecting individuals, society, and democracy as a whole is worth sacrificing some of the productivity gains from unrestricted AI, gains that, according to leading scholars, are more hypothetical than real.
Another catchy adage in the North American technology industry is “America invents, China replicates, Europe regulates.” Well, thank God for Europe.