State of the art of AI… This notion inevitably recalls Hans Moravec's illustration of the 'landscape of human competence,' where elevation in a hilly landscape represents the difficulty computers face in achieving human competencies. Advancing AI technologies are like water slowly flooding the landscape, anticipated to reach the mountain peaks where we feel safe for now. One day, AI is likely to submerge the entire landscape and outperform us all. The question is: how should you prepare for what lies ahead? Would you act as a digital utopian and grasp a lifebuoy, or be realistic and rely on the rule of law?
The opportunities and benefits presented by AI are endless, but the risks and threats posed by the very same technologies are undeniable. Considering the severe implications of biased systems for human rights and the safety of people, a level of legal protection is mandatory. To this end, the EU strives to ensure that AI technologies are developed and function in harmony with fundamental rights and principles, giving rise to the notion of 'Lawful AI.' To address the risks associated with specific uses of AI, the Commission first published the 'White Paper on AI,' setting out ethical guidelines for trustworthy AI. Since that approach was merely a suggestive instrument with no binding legal force, on 21 April 2021 the European Commission adopted the 'Proposal for a Regulation on Artificial Intelligence Act' (the 'Proposal'). This is the first-ever legal framework imposing significant regulatory compliance obligations on actors engaging with AI systems while encouraging AI innovation. Just as the GDPR shaped data governance, the Proposal will play a crucial role in shaping how AI is developed and deployed in the EU and will serve as a blueprint for legislative action in other jurisdictions, including Turkey.
The Proposal applies to providers and users of AI systems in all sectors, both inside and outside the EU, as long as the AI system is placed on the EU market or the output produced by the system is used in the EU. It covers, among others, end-product manufacturers, importers, and distributors across the AI value chain. Non-compliance with the Proposal will be subject to high penalties (i.e., fines of up to a maximum of €30m or 6% of the obligor's total worldwide annual turnover for the preceding financial year).
Prohibited AI Practices
The Proposal adopts a risk-based approach in categorizing AI systems and thus allocates liabilities on a sliding scale. Accordingly, the categories are: (i) prohibited AI practices (unacceptable risk), (ii) high-risk AI systems, (iii) limited-risk AI systems, and (iv) minimal-risk AI systems.
Self-evidently, prohibited AI practices are allowed under no circumstances. Banned practices include AI systems deploying subliminal techniques to influence human behavior (manipulative applications), AI-based social scoring systems, and the use of real-time remote biometric identification systems (RBIS) in publicly accessible spaces for law enforcement purposes.
High-Risk AI Systems Subject to Strict Requirements
The Proposal contains specific rules and restrictions for high-risk AI systems, which can briefly be defined as systems creating a 'high risk to the health and safety or fundamental rights of natural persons.' In essence, the AI systems listed in Annexes II and III of the Proposal shall be considered high-risk and subject to strict requirements. This numerus clausus list includes AI used for biometric identification and categorization of natural persons, and AI used as a safety component in the management and operation of critical infrastructure, such as the supply of utilities. It also covers AI used in employment, workers' management, and access to self-employment, including use in recruitment and task allocation. Most importantly from a financial perspective, it covers AI systems used to evaluate the creditworthiness of natural persons or to establish their credit score.
Obligations of Providers and Users of AI Systems
Providers of high-risk AI systems shall comply with a number of obligations: maintain a risk management system; meet quality criteria for training, validation, and testing data sets; draw up technical documentation demonstrating that the high-risk AI system complies with the law; and keep the logs automatically generated by the system. Moreover, providers must establish a quality management system documented in written policies and procedures, run ex-ante and ex-post conformity assessments, and comply with the necessary registration and notification obligations vis-à-vis regulatory bodies.
Similarly, users of high-risk AI systems must ensure that input data is relevant, to the extent they exercise control over that data; monitor the operation of the system; keep the logs automatically generated by the system; and carry out a data protection impact assessment.
Regulatory obligations will endure throughout the entire lifecycle of a high-risk AI system, from its design phase through its operation.
Limited-Risk AI and Minimal-Risk AI Systems
Emotion recognition systems (ERS), biometric categorization systems (BCS), chatbots, and deep fakes are listed as limited-risk AI systems under the Proposal. Their providers and users shall meet transparency obligations, namely making clear to natural persons that they are interacting with an AI system or that the content they encounter has been artificially generated or manipulated.
Although the Proposal does not officially list minimal-risk AI systems, they are detailed in the Commission's Q&A document. In this regard, most AI applications will fall into the minimal-risk category and be subject to no additional restrictions.
Impact of the Proposal on Financial Services
Financial institutions must comply with the above obligations should they use AI systems to evaluate the creditworthiness or establish the credit scores of customers, as these fall within the scope of high-risk AI systems. The same requirements must be met when AI systems are used for recruitment purposes (e.g., advertising vacancies, screening applications) or for making decisions on promotion and termination of work-related contractual relationships. Furthermore, transparency obligations will also apply to the use of chatbots by financial institutions.
Conclusion and Next Steps
The Proposal sets a minimum standard for regulating AI while presenting the first official definition of it. However, open issues remain, and the Proposal's shortcomings need to be reconsidered before it enters into force. In the meantime, institutions can take specific measures today by assessing the likely impact of the Proposal on their business and developing the necessary AI compliance and governance mechanisms in advance. At this stage, forming an interdisciplinary team comprising software developers, data engineers, ethicists, and legal experts to audit all phases of the application and the implications of AI systems used within the business might be productive.
Prof. Dr. Cenktan Özyıldırım
European Commission, ‘White Paper on Artificial Intelligence - A European Approach to Excellence and Trust,' COM(2020) 65 final, 2020. https://ec.europa.eu/info/sites/default/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf
European Commission, ‘Proposal for a Regulation of the European Parliament and the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts,' COM(2021) 206 final, 2021.
Nemitz, P. 2018. ‘Constitutional Democracy and Technology in the Age of Artificial Intelligence,' Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 376(2133).
Smuha, N. et al. 2021. ‘How the EU Can Achieve Legally Trustworthy AI: A Response to the European Commission's Proposal for an Artificial Intelligence Act,' available at SSRN: https://ssrn.com/abstract=
Tegmark, M. 2017. Life 3.0: Being Human in the Age of Artificial Intelligence, Vintage Books, New York, 52-55.