Statement on Artificial Intelligence – BM85

30.01.2024

I. General remarks about artificial intelligence and higher education 

This statement outlines ESU's position on Artificial Intelligence (AI) in higher education, aiming to articulate the students' perspective on what impact it may have, how it could be put to good use, what the current pitfalls or worrisome developments are, and how they could be mitigated. In this document, 'artificial intelligence system' (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.

Since the groundbreaking developments in generative AI in 2022 and their popularisation, discussions about the overarching impact and role of artificial intelligence in society and in higher education have gained prominence, even outpacing the actual use of AI by students and teachers and its impact in classrooms. In some cases, conversations between those affected and the development of reasoned responses have been replaced by hasty decisions to 'ban' the use of AI by students in pursuit of 'preserving' academic integrity, casting students as culprits.

However, the incremental integration of Artificial Intelligence tools in (higher) education is not a new phenomenon. By 2019, governments had already agreed on a general set of principles related to AI and education within the 'Beijing Consensus'.

The lack of adequate information about what AI is and what it is not has led to hazardous interpretations of its impact, ranging from seeing AI as a universal solution to all higher education problems and a driver of its full-scale transformation, to a dystopian alternative of an AI takeover.

ESU adopts and advocates for a balanced, nuanced, and pragmatic approach to Artificial Intelligence that puts students and their interests at the centre. Support for deploying AI as a neutral tool to enhance adaptability, innovation, accessibility, and quality should go hand in hand with a precautionary approach towards its pitfalls and how they can be prevented.

ESU acknowledges the disparity of opportunities in relation to the integration of AI. While well-resourced systems with advanced infrastructure are increasingly discussing the wide integration of AI in their learning and teaching or research activities, other systems are far behind. In this sense, we highlight the importance of equitable funding to ensure that all higher education institutions and students have the digital equipment, software, and knowledge, provided in courses and implemented in the study process, to use AI safely, critically and constructively.

It is important that all students have access to the most relevant AI tools for their fields of study. A specific pitfall is the potential inequality arising from the differing costs of AI tools. Certain auxiliary AI tools that increase performance sit behind a paywall, raising the financial burden and creating potential inequality. Therefore, ESU considers it important that higher education institutions (HEIs) take responsibility for securing this access for all students, regardless of their financial resources. Without this access, AI will increase inequality in education.

While ESU believes that AI will not fundamentally alter the principles of higher education in the medium-term future, it could nevertheless bring meaningful support to the learning and teaching process and to higher education policies in general if adequate strategies and measures are in place. The rapid growth of AI also calls for resilient systems that can both take advantage of and adapt to disruptive developments, while protecting students.

The development of AI is leading to changes in frameworks in a variety of areas: whether for educational institutions, youth activities, sport, health, culture, or the fight against fake news, it is important to emphasise that thinking about the place of AI in society needs to be carried out in a comprehensive and coherent way across the different areas, including higher education.

The following sections delve into how and to what degree AI integrates into the higher education reality, from its concrete impact in classrooms to overarching issues that need to be addressed, such as AI use in decision-making, the transparency and accountability of AI, and its impact on human rights and discrimination.

II. Artificial intelligence and learning and teaching policies

1) Role of AI in supporting innovative and student-centred education

ESU believes that Artificial Intelligence cannot replace human interaction in learning  and teaching, but can serve as a helpful add-on, both in classes and in the tasks  assigned to students, in order to support student-centred learning and innovative  methods of teaching.  

The first element to be considered is learning with AI. Through personalised interactions and the use of AI for support in solving tasks and doing routine work, visualising data or other resources, compiling data or critically analysing input, students can develop both field-specific and transversal skills, while keeping in mind that AI is a tool not just for students but also for teaching staff.

Using AI as an 'interlocutor' for assignments or other tasks can develop critical thinking skills. Furthermore, through assistive learning tools, AI can enhance the inclusion of disadvantaged groups, especially Disabled students. AI can also support teachers, acting as an assistant in creating learning resources and in pedagogy, analysing the curriculum to flag inconsistencies or other issues, or supporting the writing of learning outcomes.

AI can also enhance experiential learning, where students construct their own knowledge by exploring the learning environment. Based on knowledge tracing and machine learning, AI can offer feedback and guidance to students. Coupled with virtual reality or extended reality, AI can create an accessible and comfortable space for students to test their knowledge and to gain practical experience with tools and in places that could otherwise be inaccessible or unavailable for the learners.

AI can personalise content to match students' specific needs, ensuring an optimal learning experience, and it can adjust the format and delivery method of educational materials to match their preferred learning style. AI can help in analysing and updating curricula, ensuring that students receive the latest information and raising the knowledge base. However, it must be kept in mind that the information provided is not necessarily accurate and should always be checked.

Irrespective of whether used by learners or by teachers, learning with AI can happen only if both are adequately trained to understand what AI is, its possible functionalities  and limitations and how to implement it in learning and teaching successfully.  

There are different approaches to using AI depending on whether there is a live human mediation or support given to the learner or if the learner uses AI separately. Furthermore, different adaptations for using AI should be observed if the study programme or the learning experience is fully online. 

Conversations should take place in the higher education communities on how to make best use of available technology related to AI. As developments in Artificial Intelligence are constantly taking place, these consultations should be frequent and based on the most recent available information. This requires open and inclusive planning and decision-making on creating strategies for improving pedagogy, student-teacher interaction and student-centred learning through AI.

Not all possible uses of AI are applicable in all study fields. While some functionalities, such as in languages and translation, can prove useful for all study fields, others are applicable only to certain programmes. Understanding which AI tools bring added value is a matter of the institutional or even classroom context, and while staff should be trained in using AI, they should have the flexibility to use AI or not.

In addition, HEIs must have transparent guidelines and processes in place for the acquisition, implementation, use, and quality assurance of AI software. Students and academic staff must be involved in these processes in order to legitimise them, build trust and ensure the appropriate use of AI software.

Furthermore, AI should be used mainly to increase learner agency. Creating rigid patterns in which AI decides on strategies for learning or teaching is not acceptable, as this would diminish student-centred learning instead of supporting it.

AI can help to transform education from a one-size-fits-all approach to one that is tailored to the unique needs and goals of each student, ultimately enhancing engagement, performance, and the overall quality of education, but only if it is understood and applied properly.  

2) Learning about AI 

The second element to be considered is learning about AI. As the use of artificial intelligence in society and the economy will only increase, higher education should provide meaningful knowledge and skills to all students and staff so they can understand its impact and how it will be applied more widely. This is essential especially considering how disruptive the technology can prove to be in various fields and how it can affect everyone. In this sense, learning about AI's underlying concepts, human-AI interaction and biases is crucial.

Basic skills in using AI need to be developed in every learner, and AI can be used as a support tool in implementing different methods of teaching and learning. The proper use of AI is one of the 21st-century skills that enables learners to develop personally and to access the labour market more freely and easily. Students should be provided with modules in which they can learn about the use of AI and how it functions.

Digital literacy and knowing how to use AI are already skills the labour markets are asking for in almost all fields, since AI is already integrated into everyday working life. Teaching and testing should follow this development; not allowing AI in any way, or failing to integrate it into higher education, will damage students' competitiveness when applying for jobs. It is therefore of the essence that higher education ensures everybody is able to acquire digital skills during their education, as HEIs have a responsibility to make sure their students can compete in a digitalised world with a digitalised labour market after they graduate.

3) Role of AI in guidance and support systems in general 

As AI has made inroads into the field of education, it can potentially transform how students are guided and supported. AI can be used to enhance the educational experience by supporting personalised learning and data-driven guidance.  

HEIs need to support the effective use of AI among students and academic and administrative staff, especially by providing specific training to teaching staff so that they can use AI to support the assessment of individual student needs and propose tailored learning paths, ensuring that everyone receives an education that suits their learning style.

AI-driven systems analyse student performance and adapt content in real time. They offer additional resources and challenges to those who have achieved the learning outcomes, while providing more support to those who need it. By analysing student performance data in compliance with data protection laws and students' rights, AI can flag students who are experiencing difficulties that may lead to drop-out in the long term, allowing educators to better evaluate their learning experience and provide additional help or resources if needed. These actions should always be taken with students' wellbeing in mind.

AI can be one of the supporters in implementing student-centred learning in terms of personalised feedback: AI-supported grading and assessment systems, if well implemented, can offer detailed feedback, helping students identify and understand their mistakes and giving suggestions for improvement, which also supports each learner in having a personalised study path.

ESU believes that the proper use of AI by HEIs can create a strong support and guidance system for implementing different teaching and learning methods, updating curricula and programmes, creating personalised learning paths and personalising the feedback system.

Furthermore, AI can also be used to support the academic and career counselling of students through evidence and data, showing different choices, although it cannot replace human academic counsellors or tutors.

4) Assessment policies 

Decisions about using AI technologies in assessments are typically made based on various factors, including concerns about cheating, AI's ability to disguise the origins of content, maintaining the integrity of the examination process, and ensuring fairness for all students. Institutions may also have concerns about the potential for students to access unauthorised information during exams, which could raise ethical issues.

If these factors are indeed entrenched within our institutions and play a significant role in upholding the integrity of the assessment process in our collective consciousness, they are indicative of an outdated approach to evaluating learners that has persisted for centuries and needs to change.

Across all educational institutions, curricula naturally evolve and are regularly re-evaluated, varying by country. We no longer study the same subjects, because human knowledge is constantly advancing in every field, reflecting the changing needs of our society.

While assessments based also on knowledge remain crucial in many subjects, it is essential to reconsider assessment methods that are vulnerable to AI. If a subject can be evaluated using a methodology replicable by AI and still maintains relevance, the question of exploring new evaluation methods has to be asked.

This particular issue sparks a vital debate on whether students are assessed effectively. The traditional narrative in which students memorise and then recall knowledge or facts is in most cases clearly outdated, but the fears surrounding AI show that it is still present. Consequently, this paradigm shift in education offers a valuable opportunity to finally adopt a learning-outcomes-based approach in education.

Similar to the transformative impact of the internet and search engines, AI’s opportunities should be put to good use. The focus should not be on banning the internet or AI if assessments can be conducted online but rather on evaluating the relevance of a test when answers are readily available through these tools. 

The same applies to research assignments where original input from students is not expected. While AI should be allowed for compiling data, as long as its use is properly mentioned in detail, assessments should examine students' skills in critically analysing the data and drawing conclusions from it.

Considering AI as a tool, it is crucial to address issues of accessibility and equal opportunity. ESU believes that no AI tools should be prohibited in assessments where other technological aids such as computers and the internet are allowed. Different AI tools can, for example, be selected and recommended by professors according to their usefulness in completing the specific assessment. However, this condition should apply only if the AI tools remain openly accessible to the entire assessed population, free of charge, and if AI is used as a tool, not as a replacement for the assessment.

The arrival of AI in education is also an opportunity to improve assessments and find new ways of making evaluation of learners more relevant and less stressful. AI can analyse vast amounts of data to increase the understanding of each student’s  learning style, strengths, and weaknesses. By tailoring educational materials and  assessment to individual needs, AI can enhance personalised learning experiences,  allowing students to progress at their own pace. 

AI-powered assessments can provide instant feedback to students, with proper supervision and validation by teaching staff. This rapid feedback loop allows learners  to understand their mistakes immediately, enabling them to learn from errors and  improve their performance more effectively. This capacity also opens the door to continuous assessment that could tackle the vast majority of biases and drawbacks  of final assessment methods.  

AI can improve accessibility for students with disabilities. Text-to-speech and speech-to-text technologies, powered by AI, can assist students with visual or auditory impairments, ensuring that they have equal access to educational materials and assessments. These technologies need to be monitored by humans and made accessible, for example by giving extra time, but should not replace the current adaptive methods that these students benefit from.

It is essential for every country to contemplate these opportunities within their digitalisation strategies and incorporate them when applicable. It is crucial to acknowledge that AI policies should be applied universally across subjects while also being tailored to specific subjects according to their unique requirements and challenges. This balanced approach ensures a comprehensive integration of AI technologies in education, addressing both overarching principles and subject-specific nuances.

5) Training on the critical use of AI  

Being trained in the critical use of AI is important for several reasons, including in education, for both learners and teachers. AI systems can inherit biases present in the data they are trained on. Critical training helps users recognise these biases and make ethical decisions, ensuring that AI is used in a fair and just manner. Biases in AI training datasets are the main gateway to discrimination, rooted not in the AI technology itself but in our own biases.

AI often relies on vast amounts of data, so users need to be critical about data collection and usage to protect privacy and prevent unauthorised access or other data breaches, especially when dealing with sensitive information. It is also important to understand how AI models are trained, and the basic concepts of AI need to be taught. AI has limitations and is still not reliable on many points, including comprehending context and emotions.

AI technology, particularly in natural language processing, can generate human-like text, making it capable of creating articles, stories, and even fake news. There have been instances where AI-generated content was misused to spread false information.  

Critical thinking is essential to combat this issue, not only as students or teachers, but as individuals. By being trained to assess the credibility of AI-generated information, individuals can scrutinise the source and cross-verify facts. They can question the authenticity of the content, especially if it lacks proper citations, references, or a known publication source.

In essence, being trained to use AI critically empowers individuals and organisations to harness the potential of AI while mitigating risks and ensuring that its deployment aligns with ethical and societal values. Critical thinking about AI needs to be taught in the same way that internet use is taught in many schools nowadays. The important difference is the time scale: the internet spread over a few years, while AI is already accessible on any device, and thus its potential harms can spread more easily.

6) Quality assurance and AI 

The development of AI has highlighted the importance of making changes to quality assurance (QA) procedures. It is important to rethink mechanisms and criteria and ensure they are also applicable to the use of AI. The main challenge is to implement the proper use of AI in education in a way that supports the development and quality of education, based on proven pedagogical research and methodologies.

In order to fully realise the benefits and mitigate potential risks of AI, it is crucial to  establish robust quality assurance systems that cater to the multifaceted human,  social, and ethical dimensions of AI implementation across three core components:  data, algorithmic analyses, and educational practices. 

In terms of data quality and ethical issues, QA criteria need to reflect compliance with data protection regulations and ensure that data sources are transparent and ethically obtained, aligning with the institution's ethical standards.

We emphasise that QA mechanisms need to be adapted in terms of algorithmic transparency, providing clear explanations of how decisions are made and ensuring students and educators can understand and trust the system’s recommendations. 

Ongoing monitoring to detect and mitigate biases and inaccuracies in algorithmic outputs, with a focus on fairness and equitable treatment, is important, as is adherence to ethical guidelines and principles to prevent the negative impact of AI on students and their educational experiences.

Quality assurance should look into how AI is incorporated in curriculum design, assessment, and course content to ensure that AI aligns with educational goals and values; however, quality standards should not expect the use of AI by default.

Ethical and pedagogical oversight needs to be part of these processes to ensure that AI enhances, rather than replaces, the role of educators in guiding and mentoring students. Furthermore, QA could itself be supported by AI, through algorithms that interpret the vast amounts of data used in QA evaluations and support the analysis.

III. Artificial intelligence and higher education policies  

1) Use of AI in decision-making 

Using AI in decision-making processes offers numerous advantages, including increased efficiency and speed. AI’s ability to process vast amounts of data quickly  can lead to more efficient decision-making, particularly in complex scenarios. 

Additionally, AI algorithms excel at data analysis and prediction, identifying trends and  making accurate forecasts that could be used in education. AI implementation can result in substantial cost reduction by automating tasks that would otherwise require human labour.  

However, the use of AI in decision-making is not without challenges. One significant concern is the potential for bias and unfairness. AI systems currently inherit biases present in the data they are trained on, leading to discriminatory decisions in sensitive areas such as hiring and, more generally, the management of people.

Moreover, in the current state of development, AI lacks true contextual understanding and the ability to interpret human emotions accurately. Consequently, decisions made by AI lack the nuanced understanding that humans possess, leading to potentially misguided conclusions. Furthermore, the use of AI and data analytics can be presented as neutral, while political elements that come into play in how questions are formulated or which datasets are used can significantly worsen the biases and unfairness of AI systems.

Over-reliance on AI poses critical risks, such as the erosion of human skills like critical thinking and problem-solving, as well as security vulnerabilities, which is not tolerable in education. The decision to integrate AI into decision-making processes necessitates careful consideration of context, ethical implications, and potential biases.

AI can only support decision-making and in no case whatsoever replace it. ESU believes that no AI should be used without human supervision for decision-making in general, and that it should be banned from any kind of human resources decision-making, especially in education.

2) Transparency and explainability of AI 

While efforts are being made to enhance the transparency of AI algorithms, the intricacies and complexities involved in neural networks, which are fundamental components of many AI systems, pose significant challenges in terms of complete accessibility and readability. 

Meanwhile, believing that complete transparency in algorithms would resolve all issues related to AI is also unrealistic. This assumption rests on the idea that humans are already trained to comprehend AI, which is far from reality.

AI transparency is not only about the transparency of algorithms, but also about everything around the algorithm, including its choice. Providing context about the data used, the problem being solved, and the limitations of the AI system helps users understand the scope and boundaries of the technology, fostering transparency. It is also very important that the process and development of AI take into account the social aspects of decision-making and information presentation, as well as ethical norms.

Comprehensive documentation is also a key element of transparency. Documentation that outlines the algorithms, data sources, training methods, and validation processes used in the AI system promotes transparency. Detailed documentation enables peer review and scrutiny. Even if documentation is not meant to be widely accessible to the public, the accessibility of technical details allows expert criticism of tools that are too complex to be assessed by non-trained users.

Explainable AI is essential in higher education institutions, as it promotes transparency, accountability, and trust in the decision-making processes facilitated by artificial intelligence. In an educational setting, where faculty, administrators, and students rely on AI-driven tools for tasks ranging from personalised learning recommendations to administrative decision support, understanding how and why AI arrives at specific conclusions is crucial.

Explainability fosters a deeper comprehension of AI outputs, enabling educators to better interpret and utilise the insights generated by these systems. This not only enhances the educational experience by tailoring content to individual needs but also allows educators to identify and address potential biases in algorithms.

Moreover, in an academic environment, where ethical considerations are paramount, explainable AI empowers stakeholders to assess the fairness and integrity of AI applications, thereby ensuring responsible and equitable use of technology in higher education. 

Adherence to legal and regulatory standards related to transparency, explainability and accountability is also essential. Regulations such as the General Data Protection Regulation (GDPR) or the AI Act in Europe mandate transparency in automated decision-making processes, ensuring that user rights are protected. ESU believes that the transparency and explainability of AI should be viewed overarchingly and not only as an algorithm-related matter, especially by making sure that any use of AI in education, assessments or decision-making is fully disclosed.

3) AI and its implications in intellectual property and academic integrity  

The growing adoption of AI poses challenges to the credibility of academic credentials. With AI advancements, fabricating counterfeit diplomas or records has become simpler and harder to trace. ESU strongly advocates for concrete steps to safeguard the legitimacy of educational qualifications, using not only paper credentials but also provable digital alternatives.

Higher education institutions and credential evaluation bodies should implement effective measures to better prevent falsification. Currently, higher education institutions are encountering mounting difficulties, and the assistance and guidance from the ENIC (European Network of Information Centres in the European Region) and NARIC (National Academic Recognition Information Centres in the European Union) networks are undoubtedly essential.

AI technology raises a considerable number of challenges and opportunities related to intellectual property. On one hand, AI has the potential to revolutionise content creation, rendering traditional copyright and patent frameworks ill-equipped to address the unique issues that arise. Moreover, the rapid dissemination of content in the digital age poses difficulties in protecting intellectual property rights.

When it comes to addressing these challenges through policies, ESU thinks that, in regard to education, AI should be considered a multi-faceted tool and neither a source nor a content owner. This multi-faceted aspect of the use of AI justifies considering AI as a tool that should be cited when submitting a written assessment in education, for the sake of transparency, reproducibility, and the tracing of methods.

Nevertheless, ESU foresees the usefulness of AI as an aid in the detection and enforcement of intellectual property violations. As AI continues to move forward, the legal and ethical implications surrounding intellectual property rights in an AI-driven world will require careful consideration and new regulatory frameworks.

Serious attention must be given to addressing concerns related to plagiarism and copyright infringement. On one hand, AI-powered plagiarism detection tools can be highly relevant in identifying instances of plagiarism and copyright infringement, allowing for more effective enforcement of intellectual property rights. On the other hand, the proliferation of AI-generated content complicates the landscape: for now, it is impossible to reliably detect where AI-generated material has been used without proper citation or attribution. ESU underlines the need for evolving citation standards and copyright regulations to address the blurred lines of academic integrity, as students can currently infringe copyright unintentionally and still be liable for it, even though the content was AI-generated.

Currently, there are no AI tools that can consistently provide the source of information,  underscoring the importance of AI development reaching a mature stage. A comprehensive policy framework, capable of safeguarding intellectual property rights, is contingent upon the advancement of AI technology.  

ESU suggests prioritising the use of AI tools that can offer complete transparency regarding the sources they utilise, in accordance with intellectual property protection rules, especially in educational and research settings, as opposed to employing AI systems that are not transparent about their sources in similar contexts.

4) Research on AI impact on education 

ESU believes public funding should be available to critically assess the impact of AI in education. As AI is bound to develop further and its implications are still uncertain, the role of research is essential. Furthermore, this research should be publicly supported, as the private sector, and especially the EdTech sector, can have obvious conflicts of interest. The research should look into both the pedagogical dimension of AI and its human rights implications.

A coordinated public strategy on artificial intelligence in higher education is still missing, but starting with publicly funded research on it, with common goals and standards for data protection and the elimination of biases, can be the first step towards making AI accessible not only in the countries that are taking initiatives on their own, but for all.

IV. Artificial intelligence and its pitfalls related to discrimination, privacy  and human rights 

While Generative AI can be very beneficial in enhancing higher education, the way AI works also poses serious challenges with regard to fundamental rights. As Generative AI is modelled by humans and its content production is based on the data it is trained on, AI systems are prone to reflecting biases. Thus, generated content can reflect stereotypes, use discriminatory language and visuals, and be culturally insensitive, failing to recognise regional or cultural differences. The deployment of Generative AI for automated essay scoring and the personalisation of learning systems runs the risk of introducing such biases.

Additionally, as Generative AI systems train themselves on the input they receive from users, risks of mishandling and exposure of user data exist, especially with regard to intellectual property rights. Lastly, there is a commodification aspect to the usage of Generative AI in education: as the technology is usually owned by private enterprises, higher education becomes dependent on the private sector, and the ownership of the data produced through the use of AI systems is placed on company servers.

Special attention should be paid to online proctoring, i.e. digital examination supervision with special software, which is intended to automatically detect attempts at cheating. The use of proctoring for exams can occur through live proctoring (monitoring by a person in front of a computer), automated proctoring (where algorithms, machine learning or AI are used), or a combination of the two.

The range of proctoring software capabilities extends from video surveillance with face, gaze and voice recognition, filming via the webcam of a device, the analysis of typing behaviour, the restriction of functions on a device and automated plagiarism detection on content produced during the exam, to comprehensive access to the exam takers' digital devices, allowing for example access to files on the device and information about other users on the same internet network. Furthermore, browser add-ons allow access to visited websites, clipboards and browser settings. In cases where students install proctoring software on their device (instead of only an add-on), access to almost all information stored on the device's system, and the possibility of changing the system permanently, are possible.

This poses problems for students' privacy and IT security rights, as in the case of live-proctored exams the conductor of the exam, usually teaching staff who are in an unequal power relationship with students, gains access to personal data. Furthermore, even if provisions are in place, the storage of private data on company- or state-owned servers, the deletion of the collected data, and the potential misuse of the data for profit-oriented gains are not transparent or traceable.

Proctoring software is trained on datasets of persons who are usually white, able-bodied men. This leads to biases in the software, as women, people of colour and persons with disabilities are disproportionately often wrongly flagged for cheating. Thus, decisions of AI systems based on faulty data can lead to grave human rights violations, affecting vulnerable groups disproportionately. Additionally, the use of proctoring software is known to negatively influence the mental wellbeing of students, with the added stress potentially impacting student performance during exams.

Given the high risk of infringements of student rights, ranging from privacy and IT security to discrimination, the usage of proctoring software and other Generative AI in education must be embedded in a human rights approach, following principles of proportionality between the usage of proctoring and Generative AI vis-à-vis students' privacy, IT security, anti-discrimination and other fundamental rights.

Students need to be enabled to understand the potential biases of Generative AI systems and the implications these have for their rights, and must always be given the option not to partake in proctored exams. Students' intellectual property rights need to be upheld at all times. Automated essay scoring should not be trusted blindly but should always undergo a final revision by a human being, for which training on the biases of Generative AI is pivotal. The same applies to the usage and design of AI for the personalisation of learning systems, as it is crucial to detect biases to prevent damage being done to learners.

V. Artificial intelligence and its pitfalls related to the environment 

The use of AI can have unwanted side effects for the environment. The rise in the usage of AI comes with a significant rise in energy consumption, so much so that experts warn about the environmental consequences this could have. HEIs should be aware of this environmental impact when implementing AI in their curricula.

VI. Regulating AI  

ESU is strongly in favour of regulating Artificial Intelligence, and especially its application in education. We support the designation, proposed in the Artificial Intelligence Act currently under discussion in the European Union, of the use of AI in education as high-risk.

Higher education institutions should have the right to restrict the use of AI when it is intrusive and runs counter to privacy and human rights, such as for proctoring or automated decisions on assessments that impact students' access to higher education. Agreements should be made at European and international level to ensure cooperation in cross-border situations of infringement of rights. ESU supports an international approach to AI which fully integrates the perspectives of the Global South, which is currently absent from the negotiating table where the principles of AI are being set.

When used in higher education, AI deployment should follow two sets of rules. For more sensitive applications, such as decision-making, AI should be deployed only after compliance checks. For other applications, regular quality assurance procedures after deployment should analyse the degree to which AI supports the quality of education, student-centred learning and learner agency, as well as inclusion.

While such regulations should stem from the national level, taking into account international agreements, higher education institutions should be able to decide for themselves to what degree they integrate AI tools, considering various local factors. When AI is deployed, institutional regulations should be designed for coherent implementation, monitoring, and evaluation with stakeholders.
