The AI Act’s regulatory goals are undeniably commendable: besides ensuring the free movement of AI-based goods and services, the AI Act is meant to guarantee a high level of protection of health, safety and fundamental rights. However, AI technology and the AI ecosystem are still in their infancy, which makes the object of regulation a moving target and prone to over-regulation. The difficulties and costs of adapting to and implementing the General Data Protection Regulation (GDPR) serve as a cautionary example.
1. How the AI Act Took Shape: A Timeline
When the AI Act was first proposed by the EU Commission in April 2021, it introduced a risk-based approach, dividing AI systems into four categories ranging from minimal to unacceptable risk, based on the potential harm a system may cause. The higher the risk, the stricter the requirements and obligations for providers of these systems, from labelling obligations for chatbots and AI‑generated texts and images to detailed risk assessments, market monitoring obligations and CE certification. As an overarching principle, the AI Act tries to limit the autonomy of AI systems and put control over decision-making processes back into people’s hands: human oversight of AI systems, explainability of AI‑driven decision making, and a “kill switch” that allows human supervisors to stop and override any decision made by the AI system.
2. ChatGPT: A Cause for Revision
When looking into the material scope of the AI Act, you will find that almost every industry, from automotive to the judiciary and health care, will have to abide by the new regulation. However, what initially didn’t spark much debate was the regulation of so-called General Purpose Artificial Intelligence (GPAI): AI systems that do not fall clearly within one risk category but may be used in several deployments, such as ChatGPT. ChatGPT could, for example, be used to create content for online marketing (a low-risk application) but also aid judges and lawyers as a writing assistant (a high-risk application). Likewise, so-called Foundation Models, which are trained on vast amounts of data, are often open-sourced and may be used by AI developers for free, were not considered in the first draft of the AI Act.
A fierce debate is currently ongoing about the regulation of both GPAI and Foundation Models. While both add substantial economic value by enabling a multitude of downstream use cases, the legislator struggles to strike the right balance between information obligations, liability for third-party uses, and not suffocating the innovation potential of these technologies.
3. Impact on the AI Industry
The declared goal of the AI Act is to build an AI industry in Europe that will become the worldwide beacon for responsible and trustworthy AI.
A market study by the appliedAI Initiative asked startup founders in Germany which risk category of the AI Act their business cases would fall into. It turned out that 33% of founders think their products would be classified as high-risk, 50% believe that the AI Act will slow down innovation in the EU, and 16% even consider ceasing development of their AI product or relocating outside the EU. This tendency has been confirmed by several startups that we spoke to. We know of cases, for example, of US investors luring promising companies to settle in the US in order to avoid the legal uncertainties surrounding the regulatory framework in Europe.
On the other hand, we learned from partner firms and industry experts in Israel that Israeli startups closely monitor the development in Europe and strive to comply with the upcoming AI Act to maintain access to the large European market. Apparently, the Commission’s objective of turning the EU market into a leading example of AI regulation is already gaining traction. Presumably, this effect is supported by the broad scope of applicability of the AI Act, which covers not only companies located in the EU but also those that want to provide cloud or software-as-a-service offerings to customers within the EU. The Commission’s idea was to prevent any attempt to outsource AI services to non-EU countries with a lower level of AI regulation.
It turns out that, with wise regulation, the EU AI Act could become a selling point for the European AI industry and a seal of approval for foreign companies.
4. Call for Action
Again, the goals of the Commission’s proposal are well justified. However, a wish list for the legislator would contain at least the following aspects:
- AI regulation must not dampen innovation and the dissemination of AI technology. In Europe we face an aging population and a labor shortage. If we want to maintain our prosperity, we need to increase the efficiency of our workforce.
- AI regulation must consider the valuable contribution of startups to our economy. Without a smooth transition from world-class university research to market-ready spinout startups, we will lose a substantial share of our innovation potential. Founding a company in Europe, and particularly in Germany, is already hampered by an abundance of regulation. The legislator must therefore provide sufficient regulatory leeway for startups to invest time and resources in building their businesses before having to consider all the granularities of AI regulation.
- The regulatory approach of the AI Act is innovative in itself when it comes to the establishment of so-called regulatory sandboxes: public institutions that provide startups with guidance, counseling and safe harbors to test their products. However, these sandboxes must be equipped with sufficient financial and personnel resources to become credible partners within the AI industry.
Do you want to stay updated on the most important news and trends in the field of AI? Don’t forget to subscribe to our LinkedIn Newsletter "AI News to Go"!