Education and Artificial Intelligence
This study examines education and vocational training as addressed in Regulation (EU) 2024/1689 (the AI Act). The Regulation addresses AI in education in two main ways: as a sector that can benefit from the use of AI, and as an area where certain AI systems pose a high risk to fundamental rights and are therefore subject to strict requirements.
AI Opportunities in Education
AI is recognized as a family of technologies that can generate benefits across a wide range of sectors, including education and training.
Specifically, the deployment of AI systems in education is important for fostering high-quality digital education and training. The goal is for all students and teachers to acquire and share the necessary digital skills and competencies. This includes media literacy and critical thinking to actively participate in the economy, society, and democratic processes.
Classification of AI Systems in Education as "High Risk"
Despite their potential benefits, the Regulation stipulates that certain AI systems used in education and vocational training must be classified as high-risk. This classification is justified because these systems can determine a person’s educational and career path and, consequently, affect their ability to secure their livelihood. If not designed and used correctly, these systems can:
Intrude upon and violate the right to education and training.
Violate the right to non-discrimination.
Perpetuate historical patterns of discrimination, for example, against women, certain age groups, people with disabilities, or people of a certain racial or ethnic origin or sexual orientation.
Annex III of the AI Act
Annex III of the AI Act, point 3, lists the categories of AI systems considered high-risk in this area:
AI systems intended to determine access or admission:
AI systems intended to be used to determine the access or admission of individuals to educational and vocational training establishments at all levels or to allocate individuals among such establishments.
AI systems intended to assess learning outcomes:
This includes cases where such outcomes are used to guide the learning process of individuals in educational and vocational training establishments at all levels.
AI systems intended to assess appropriate educational attainment:
AI systems intended to be used to assess the appropriate level of education that an individual will receive or be able to access, in the context of educational and vocational training establishments at all levels.
AI systems intended to monitor behavior during exams:
AI systems intended to be used for monitoring and detecting prohibited behavior by students during exams within educational and vocational training establishments at all levels.
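The four categories above can be expressed as a simple screening aid. The sketch below is purely illustrative and not legal advice: the enum names, keyword lists, and matching logic are assumptions introduced here for illustration, not terms defined by the Regulation, and a real legal classification would require case-by-case analysis of the system's intended purpose.

```python
# Illustrative screening sketch (not legal advice): map a free-text
# description of an AI system's intended purpose to the four high-risk
# categories of Annex III, point 3. Keyword lists are assumptions.
from enum import Enum


class EduHighRiskCategory(Enum):
    ACCESS_OR_ADMISSION = "access or admission to educational establishments"
    LEARNING_OUTCOMES = "evaluation of learning outcomes"
    EDUCATIONAL_LEVEL = "assessment of appropriate level of education"
    EXAM_MONITORING = "monitoring of prohibited behavior during exams"


# Hypothetical keyword triggers per category, for illustration only.
KEYWORDS = {
    EduHighRiskCategory.ACCESS_OR_ADMISSION: ("admission", "allocate", "access to"),
    EduHighRiskCategory.LEARNING_OUTCOMES: ("learning outcome", "grading"),
    EduHighRiskCategory.EDUCATIONAL_LEVEL: ("level of education",),
    EduHighRiskCategory.EXAM_MONITORING: ("exam", "proctor", "cheating"),
}


def annex_iii_point_3_matches(intended_purpose: str) -> list[EduHighRiskCategory]:
    """Return the Annex III, point 3 categories whose keywords appear
    in the stated intended purpose."""
    text = intended_purpose.lower()
    return [cat for cat, words in KEYWORDS.items()
            if any(w in text for w in words)]
```

A system matching any category would then be subject to the high-risk obligations discussed below.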
Specific Prohibited AI Practices in Education
The Regulation explicitly prohibits certain AI practices in education if they are deemed to pose an unacceptable risk and contravene EU values.
The placing on the market, putting into service, or use of AI systems to infer the emotions of a natural person in educational settings is prohibited. This prohibition takes into account the power imbalance in the educational context, coupled with the intrusive nature of these systems. The use of such systems could lead to detrimental or unfavorable treatment of certain individuals or entire groups.
The only exception to this prohibition is for AI systems placed on the market or put into service for medical or safety reasons.
Protection Measures and General Requirements
Since these systems are classified as high-risk, both providers (developers) and deployers (end users, such as educational institutions) must comply with the stringent requirements set out in Chapter III, Section 2 of the Regulation, including:
AI Literacy:
Deployers must ensure that staff operating high-risk AI systems in the workplace, including educational institutions, have the necessary competence, training, and authority. This involves understanding the risks and benefits of AI, as well as possessing the knowledge required to make informed decisions.
Risk Management:
They must establish and maintain a continuous risk management system. This includes identifying and analyzing known and foreseeable risks that the system may pose to fundamental rights.
Data Governance and Bias:
Strict data governance is required. Training datasets must be relevant, representative, and as free as possible from errors and biases. This is crucial to mitigate potential biases that could perpetuate and amplify existing discrimination against vulnerable groups.
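One practical input to these data-governance duties is a representativeness check on the training data. The sketch below is a minimal illustration, not a procedure prescribed by the Regulation: the function name, the reference shares, and the tolerance threshold are all assumptions made here for the example.

```python
# Illustrative sketch: flag demographic groups whose share in the training
# data deviates from a reference population share by more than a tolerance.
# The tolerance value and group labels are assumptions, not values from the Act.
from collections import Counter


def representation_gaps(group_labels, reference_shares, tolerance=0.05):
    """Compare observed group shares in the dataset against reference
    population shares; return groups deviating by more than `tolerance`
    as {group: (observed_share, expected_share)}."""
    counts = Counter(group_labels)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = (observed, expected)
    return gaps
```

For example, a dataset with 30 women and 70 men checked against a 50/50 reference would flag both groups, prompting a review before the data is used to train an admissions or grading system.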
Human Oversight:
Systems must be designed to allow effective oversight by natural persons, enabling the interpretation of outputs and the ability to intervene at any time or to disregard the results generated by the system.
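In software terms, this oversight requirement favors designs in which the system's output is only a proposal that a natural person can confirm, replace, or discard. The sketch below is one possible human-in-the-loop pattern, with names and structure invented for illustration; it is not an architecture mandated by the Regulation.

```python
# Illustrative human-in-the-loop sketch: the model's output is a proposal,
# and a human reviewer can accept it, override it, or supply a different
# outcome. All names here are hypothetical.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class ReviewedDecision:
    model_output: str   # what the AI system proposed
    final_output: str   # what was actually decided
    overridden: bool    # whether the human changed the proposal


def decide_with_oversight(model: Callable[[str], str],
                          case: str,
                          human_review: Callable[[str, str], Optional[str]]
                          ) -> ReviewedDecision:
    """Run the model, then let a human reviewer confirm or replace the
    proposal; returning None from the reviewer means 'accept as is'."""
    proposal = model(case)
    correction = human_review(case, proposal)
    if correction is None:
        return ReviewedDecision(proposal, proposal, False)
    return ReviewedDecision(proposal, correction, True)
```

Recording both the proposal and the final decision also supports the traceability that high-risk systems require.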
Conclusion
Implementing these systems in education requires a delicate balance between technological innovation and the protection of students' fundamental rights. Beyond the requirements outlined above, deployers must also meet the Regulation's documentation obligations and, where applicable, carry out a fundamental rights impact assessment before putting a high-risk system into use.