Guest author: Lucas Bonatto, Director of AI/ML at Semantix.
Artificial intelligence (AI) has made monumental strides in industries worldwide. It is at once creating opportunity (for instance, Netflix’s recent $900,000 salary job posting for an AI-focused product manager) and stoking fear (as evidenced by luminaries such as Steve Wozniak, Elon Musk, and over 1,300 industry experts signing an open letter calling for a six-month halt to AI development).
As companies in the United States come up with their own terms for monitoring and controlling the technology, the European Union is actively working on regulating AI across its nations. The goal is to ensure AI’s responsible development and use while promoting innovation and competitiveness.
Here’s a close look at some key developments that European entrepreneurs should know about AI regulation.
EU Passes Landmark AI Law
In June 2023, members of the European Parliament (MEPs) voted to adopt the EU AI Act, which the European Commission proposed in April 2021. Their justification? To establish a harmonised regulatory framework for AI systems across EU member states.
It outlines four categories of AI systems: prohibited AI practices, such as social scoring; high-risk AI systems, such as those used in critical infrastructure, healthcare, and law enforcement; generative AI; and limited-risk AI. As the risk level rises, companies will face increased scrutiny and mandatory assessments before reaching the market, to ensure the safe use of AI in the region. Key principles include:
- Safety, security, and robustness
- Transparency and explainability
- Accountability and governance
- Contestability and redress
The most important regulations aim to ensure that models do not aggravate social problems by encoding unfair bias into their predictions. Left unregulated, AI risks enabling the manipulation of public opinion and breeding mistrust.
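One concrete way to check whether a model’s predictions encode unfair bias is a demographic parity audit: compare how often the model produces a positive decision for each group. Below is a minimal sketch in Python; the loan-approval data and group labels are entirely hypothetical, and real audits use richer metrics than this single gap.

```python
# Minimal demographic-parity audit: compare a model's positive-prediction
# rate across groups. All data below is hypothetical.

def selection_rates(predictions, groups):
    """Positive-prediction rate per group."""
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    return rates

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan-approval predictions (1 = approve) for two groups.
preds  = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(selection_rates(preds, groups))        # {'A': 0.75, 'B': 0.25}
print(demographic_parity_gap(preds, groups)) # 0.75 - 0.25 = 0.5
```

A gap near zero suggests the model approves both groups at similar rates; a large gap, as here, is exactly the kind of disparity regulators would expect companies to detect and explain.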
Goodbye, Generative AI?
Large language models (LLMs) and other generative techniques will likely struggle to comply with robustness and transparency requirements.
Say a startup creates a sophisticated text-generating AI system that can write in a human-like manner on arbitrary topics. Eager to monetise it, the team builds an app that generates customised news articles on demand. It gains popularity as an entertainment tool for impressing friends.
However, without scrutiny, the generative AI inherits unintended biases from its training data, which included some problematic online sources. The system begins skewing its generated text by race, gender, and other attributes in unfair and misleading ways.
For this reason, Causal AI (AI that can explain cause and effect) is at the cutting edge of “XAI” (eXplainable AI) and AI fairness. Unlike conventional AI, which trades off transparency for accuracy, causality-based models deliver both high performance and explainability. Governing bodies can query them to explain how specific outputs were generated, and can therefore easily evaluate them for fairness and impartiality.
Adoption of this type of AI system is growing rapidly because it establishes cause-and-effect relationships between variables, helping ensure the safety and fairness of AI predictions. Causal AI is potentially the only technology that can reason and make choices the way humans do. That’s because it utilises causality, the principle that everything has a cause, to go beyond narrow machine-learning predictions.
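To illustrate why causal reasoning matters, the toy example below (not tied to any specific product, with hypothetical variables) contrasts what observational data suggests with what an intervention reveals. A hidden confounder drives both a “treatment” and an outcome, so the two correlate even though the treatment has no causal effect:

```python
import random

random.seed(0)

# Toy structural causal model: a hidden confounder Z drives both a
# treatment X and an outcome Y. X has no direct effect on Y, so any
# association between X and Y is spurious. All variables are hypothetical.

def sample(do_x=None):
    z = random.random()                           # hidden confounder
    x = do_x if do_x is not None else (z > 0.5)   # X caused by Z, unless we intervene
    y = z + random.gauss(0, 0.1)                  # Y caused only by Z, never by X
    return x, y

# Observational data: Y looks higher when X is true (confounding).
obs = [sample() for _ in range(10_000)]
y_given_x1 = sum(y for x, y in obs if x) / sum(1 for x, y in obs if x)
y_given_x0 = sum(y for x, y in obs if not x) / sum(1 for x, y in obs if not x)

# Interventional data: setting X by hand (the do-operator) severs the Z -> X link.
do1 = [sample(do_x=True) for _ in range(10_000)]
do0 = [sample(do_x=False) for _ in range(10_000)]
y_do1 = sum(y for _, y in do1) / len(do1)
y_do0 = sum(y for _, y in do0) / len(do0)

print(y_given_x1 - y_given_x0)  # large gap: spurious association
print(y_do1 - y_do0)            # near zero: no causal effect
```

A purely correlational model would report the first gap and wrongly credit X with influencing Y; a causal model, by simulating the intervention, can explain that the effect vanishes, which is the kind of answer regulators can actually interrogate.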
The Impact on Innovation and Competitiveness
A strength of the EU AI Act is that it treats different categories of AI differently based on how much potential damage their systems can introduce. Identifying and restricting AI by risk level can benefit companies by attracting conservative investors, who will be more comfortable investing in the space given the regulations. On the other hand, it could create difficulties for more disruptive companies trying to challenge the status quo.
Companies with deep pockets to invest in legal counsel, and which work together with regulators, can keep innovating by helping adjust the law where appropriate. Nonetheless, this raises a new issue of increased market concentration, as it suppresses the disruptive potential of startups with limited resources.
That’s why, when not done right, strong regulation can become a bottleneck for innovation. Leading developers and regulators in Europe must work closely during the rollout of regulations to ensure that existing inefficiencies in legal systems do not impede the technology’s progress. This means being constantly vigilant about regulating what needs to be regulated, at the right stage of technological maturity.
Looking Ahead for Startups
Startups must understand the EU AI Act inside and out, and know where they fit into the regulatory framework. In the face of conflicts between their ideas and regulations, the first step is due diligence to ensure that there’s no workaround within the existing terms.
AI developers who are confident that their products will positively impact the world can operate experimentally, following the path of research. Making progress in a controlled environment lets them gather the evidence needed to win buy-in from investors. Once they demonstrate their case, investors can supply the resources to build on it and help evolve the regulation.
As society’s understanding of AI technologies progresses, the trend will likely be to relax some bans or adapt them to be less restrictive for specific use cases.
The EU will continue to spearhead regulation with the AI Act, setting out a tiered approach that governs AI from the highest risk to the most limited. Although companies continue to challenge the act’s impact on innovation, the EU’s mission to strengthen social safety, transparency, and accountability can’t go unnoticed.
Entrepreneurs must pay close attention to the risk categories outlined and ensure they are ready to comply with the proposed regulations to protect EU citizens’ safety and fundamental rights.