Some Chinese AI companies are already expecting to spend more time and money complying with the new EU rules, amid concerns that over-regulation could stifle innovation.
“The EU institutions may give people the impression of over-regulation,” said Tanguy van Overstraten, a partner at Linklaters and head of the firm's Technology, Media and Telecommunications (TMT) group in Brussels. “What the EU is trying to do with AI law is create an environment of trust.”
The AI Act sets out obligations for technologies based on the extent of their potential risks and impacts. The regulation consists of 12 main titles covering, among other things, prohibited practices, high-risk systems, transparency obligations, governance, post-market monitoring, information sharing, and market surveillance.
The regulation also requires member states to establish so-called regulatory sandboxes and real-world testing at national level. However, the rule does not apply to AI systems or models (including their outputs) that have been specifically developed and operated solely for the purposes of scientific research and development.
If a company wants to test an AI application in the real world, “they can benefit from a so-called sandbox that can last for up to 12 months, during which they can test their systems to a certain extent,” Linklaters' van Overstraten said.
Failure to comply with rules banning certain AI practices could result in administrative fines of up to 35 million euros (US$38 million) or up to 7% of the violating company's total worldwide annual turnover for the previous financial year, whichever is greater.
“EU regulations on the quality, relevance and representativeness of training data require us to be even more careful in choosing our data sources,” said Dayta AI's Tu.
“Our focus on data quality will ultimately improve the performance and fairness of our solution,” he added.
Tu said the AI Act offers a comprehensive, user-rights-focused approach that “imposes strict restrictions on the use of personal data.” By comparison, “China and Hong Kong's rules appear to be more focused on enabling technological advances and aligning them with the strategic priorities of their governments,” he said.
More broadly, China's rules require that AI models and chatbots not generate “false and harmful information.”
“Chinese regulations require companies and products to adhere to socialist values and ensure that AI output is not perceived as harmful to political or social stability,” said Alex Roberts, a partner at Linklaters in Shanghai and head of the firm's China TMT group. “For multinationals who are unfamiliar with these concepts, this could cause confusion among compliance officers.”
He added that China's regulations are so far focused only on GenAI and are “viewed as more of a state- and government-led rulebook,” while EU AI law is “focused on user rights.”
Still, Roberts said the key principles of EU and Chinese AI regulation are “very similar,” including being “transparent to customers, protecting data, being accountable to stakeholders, and providing direction and guidance on products.”
“Currently, some governments in the [Asia-Pacific] region are working on their own AI legislation, drawing heavily from EU regulations on data and AI,” said Linklaters' Roberts. “Firms could consider lobbying local government stakeholders to increase harmonization and consistency of rules across markets.”