The European Commission has presented its proposal for the Artificial Intelligence Act (AI Act), a significant milestone in the field of artificial intelligence that places the European Union at the forefront of both regulation and innovation in the AI sector for the coming years. The proposal aims to harmonise rules for the development, market entry, deployment and adoption of AI, while at the same time addressing the associated risks. The AI Act proposes a risk-based classification and represents one of the first substantial steps to address the social and ethical challenges posed by artificial intelligence.
Key points of the presentations are:
I) Involved actors:
- Operators of AI systems based in the European Union.
- Operators of AI systems based outside the European Union, if they operate AI systems within the European Union.
- Suppliers placing AI systems on the market or putting them into service outside the European Union, where those systems are also used within the European Union.

II) Sanctions in the event of non-compliance: The AI legislation establishes a clear liability regime in the event of non-compliance. It is therefore crucial to comply with this legislation, since violations can result in significant penalties. The legislation distinguishes three levels of violation, with specific fines depending on the type of violation.
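As a purely illustrative aid (not legal advice), the short Python sketch below restates the territorial scope rules from point I as boolean logic; the class and field names (Actor, based_in_eu, system_operated_in_eu, system_used_in_eu) are assumptions introduced for the example, not terms from the Regulation.

```python
from dataclasses import dataclass

@dataclass
class Actor:
    """Illustrative model of an actor touched by the AI Act (names are assumptions)."""
    role: str                     # e.g. "operator" or "supplier"
    based_in_eu: bool             # actor established in the European Union
    system_operated_in_eu: bool   # the AI system is operated within the EU
    system_used_in_eu: bool       # the AI system is also used within the EU

def falls_under_ai_act(actor: Actor) -> bool:
    """Rough scope check mirroring point I above."""
    if actor.role == "operator":
        # Operators based in the EU, or based elsewhere but operating AI systems within the EU.
        return actor.based_in_eu or actor.system_operated_in_eu
    if actor.role == "supplier":
        # Suppliers outside the EU whose systems are nevertheless used within the EU.
        return actor.based_in_eu or actor.system_used_in_eu
    return False

# Example: a non-EU supplier whose system is used inside the EU is in scope.
print(falls_under_ai_act(Actor("supplier", False, False, True)))  # True
```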
AI Act: banking application
III) Players: Provider, User, Importer, Distributor, Product manufacturer.

IV) Risk classification: The Artificial Intelligence Regulation proposes a risk-based approach to assess and mitigate the impact of artificial intelligence systems on the fundamental rights and safety of users, with different requirements for each level. The three levels are: Unacceptable, High Risk, Lower Risk (a minimal mapping from level to obligations is sketched after this list).

V) Other artificial intelligence models covered by the AI Act but outside the classification (impacts to be assessed): AI systems used for data collection and (predictive) analysis; AI systems used for process automation; AI systems used for operational efficiency and/or data-driven decisions; document processing automation; machine translation; pricing and risk management decisions.

VI) Determining factors identified in the AI Act: defining an adequate risk management system; training, data validation and testing; accuracy, robustness and security.

VII) Ensuring compliance once AI systems are in operation, beyond the design phase: establish governance; assess risks; create an AI ethics committee; assign responsibilities; promote awareness.
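To make the risk-based approach of point IV concrete, the sketch below (Python, purely illustrative) maps each of the three levels named above to indicative obligations. The high-risk entries reuse the determining factors from point VI; the entries for the other two levels are assumptions added for the example and are not quotes from the Regulation.

```python
from enum import Enum

class RiskLevel(Enum):
    """The three levels named in point IV."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LOWER = "lower"

# Indicative obligations per level; high-risk entries echo point VI,
# the others are illustrative assumptions, not an exhaustive reading of the AI Act.
OBLIGATIONS = {
    RiskLevel.UNACCEPTABLE: ["prohibited practice: the system may not be deployed"],
    RiskLevel.HIGH: [
        "adequate risk management system",
        "training, data validation and testing",
        "accuracy, robustness and security controls",
    ],
    RiskLevel.LOWER: ["lighter requirements, e.g. transparency towards users"],
}

def obligations_for(level: RiskLevel) -> list[str]:
    """Return the indicative obligation checklist for a given risk level."""
    return OBLIGATIONS[level]

print(obligations_for(RiskLevel.HIGH))
```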
For further information, please contact: