Find out more about the European Commission's proposal for a regulatory framework for artificial intelligence. The framework aims to ensure the responsible use of AI in Europe, with rules proportionate to the level of risk, while supporting innovation: a comprehensive regulation designed to prevent technological abuses.

The European Commission recently unveiled a proposal to regulate the use of artificial intelligence (AI) in order to prevent abuses. The proposal takes a proportionate approach, weighing the risks associated with each use of AI. The aim is to create an environment favorable to innovation, while protecting the security and fundamental rights of European citizens.
The proposal aims to establish a legal framework similar to the GDPR, to ensure that AI is used in a trustworthy manner. It will apply to EU member states, but will also affect Switzerland, particularly companies with trade relations with the EU. Switzerland could draw inspiration from this regulation to develop its own rules, in line with the guidelines set by the Confederation at the end of 2020.
The EU is well aware that over-regulation could stifle innovation, particularly in the face of competition from the USA and China. This is why a risk-based approach is favored. Systems presenting an "unacceptable risk" will be banned, such as real-time facial recognition technologies used by law enforcement agencies, or social scoring systems similar to those deployed in China.
Conversely, AI systems presenting minimal risk will not be subject to any additional legal obligations. However, their suppliers will be able to choose to voluntarily adhere to codes of conduct.
The regulations focus mainly on some twenty uses deemed high-risk. These include the use of AI in human resources (recruitment, promotions, productivity assessments) and in public administration to determine eligibility for social benefits. AI used to assess the creditworthiness of individuals is also considered a high-risk system.
Companies or public authorities developing or using these high-risk applications will have to comply with strict requirements. Suppliers will have to follow a conformity assessment process, which includes applying good data governance practices, producing technical documentation for the system, and putting in place mechanisms to record and monitor the operation of AI systems.
Users will need to ensure that the data used is relevant, and check that the systems are operating correctly according to the instructions for use.
The cost of compliance varies according to the stakeholders involved. For suppliers, the assessment of a medium-risk AI system would cost around 6,000 to 7,000 euros, while human oversight of AI systems could cost users between 5,000 and 8,000 euros per year.
While Europe is moving ahead with AI regulation, the USA is following a similar path. The Federal Trade Commission (FTC) recently published guidelines to prevent discrimination caused by biased algorithms. It warned providers that responsibility for an algorithm's performance lies with them, and that it could take legal action in cases of discrimination or deception.
AI regulation is thus becoming a global issue, with legislative initiatives underway in both Europe and the USA.
Source: ICTjournal