BM84: Resolution on AI advances and their impact on higher education

Recent months have seen headlines all over the world describing and analysing the
impact of Artificial Intelligence on society, culture, education, and the workplace,
along with divergent calls to action on how to approach it. The underlying cause of this
upheaval is the rapid growth in the power of generative AI, epitomised by ChatGPT,
to perform tasks once reserved for humans and to disguise the origin of content.

ESU believes Artificial Intelligence will have a lasting impact on society and
(higher) education. Even more so than digitalisation, artificial intelligence tools,
which should not be wrongly reduced to ChatGPT or generative AI alone, can support an
innovative and student-centred higher education. The use of AI in classrooms, as well as
in individual learning, can support personalised activities based on skills and
needs, adapt learning paths, offer guidance and feedback, and increase
experiential learning. It can select data and compile learning resources for teachers
and students at a pace never seen before. By prompting the adjustment of
assessment policies, it can lead to more learning-outcomes-focused, skills-based,
holistic assessment. Considering all these different ways of using AI within higher
education, it is of paramount importance to pay attention to the training of teachers,
academic staff and students in order to promote an informed and critical use of
artificial intelligence. Every person within HEIs needs to be trained to make the best
use of the new AI systems they are expected or recommended to interact with, but
most importantly they need to be educated on their digital rights and responsibilities.
However, understanding of the impact of AI on society and education remains scarce
and in an early phase of development, with more and more actors realising that AI is
not a solution for everything and that it has crucial pitfalls which need to be
addressed in order to mitigate potentially consequential and destructive effects.
The ability of machines to act similarly to humans is becoming less and less opaque
as new tools are released and new scientific research is published. Research and
expertise are even scarcer regarding the impact of AI on education, and ESU calls
for public funding for open research projects in this direction, as otherwise we can
only hypothesise about its real impact.
It has nevertheless been shown that, because AI is built on human-generated data
and patterns, it readily reproduces prejudice, with a negative impact on equity
and equality through biases against minorities and on the basis of gender. This can
have even more unacceptable outcomes if AI is used to make decisions, to base
assessment results solely on AI outputs, or for data analytics, practices that can in
effect downgrade the importance of real student feedback. AI should therefore be
used as a tool to assist decision-making, not to replace humans. It is important to
remember that the AI biases which lead to discrimination stem in most cases from
flaws in the datasets used by the algorithms, not from the formulation of the
algorithms themselves. It is therefore crucial to demand that the datasets used by
AI systems be curated as carefully as possible and that their characteristics be
made public, ensuring diverse representation both within the sample and among the
people working on its creation, as well as fair compensation for the work done.

Despite their ability to answer the classically formulated problems they have been
trained on, generative and decision-making AI are still in the early stages of
development and have proven to be easily biased, wrongly used, or prone to absurd
outputs. ESU believes that AI should, for the moment, be considered a key tool in
many fields to assist learning and teaching, decision-making and production, but it
should never be given responsibilities, as mistakes caused by AI can become more
serious depending on the use humans make of it.
AI-based decision-making should be double-checked by humans and should be
made public, especially in the field of (higher) education. Understanding the
process behind biases and mistakes requires transparency, and this is something
society cannot afford to ignore.

Furthermore, even more than in the case of digital platforms, the privacy risks posed
by AI are high, as AI is able not only to collect and store personal data, but also to
connect the dots and exploit that data for specific purposes.
The most discussed impact of AI so far, albeit only superficially and from a
one-sided perspective, concerns the potential or actual illegitimate use of AI
by students and its effect on academic integrity. Indeed, universities have rushed to
address how AI could create a surge of plagiarism in course assessments and exams,
going even as far as banning the use of AI. Would-be solutions have also been
devised, such as using AI tools to check whether content was produced by AI,
though without evidence of their accuracy. ESU believes this discussion should
focus not on students as wrongdoers, but rather on how teaching and assessment
policies should be adjusted to cater for, or take into account, the potential use of
AI, as well as on identifying areas where AI cannot be used. Students should be
involved in assessing the potential of AI, as they are one of the main
stakeholders. Furthermore, the impact of AI on false diplomas, false records or even
false applications has not yet been discussed, and misconduct committed with
the use of AI has not been fully defined.

All these pitfalls lead to the conclusion that regulators and higher education
institutions need to be careful when implementing AI in education, tailoring its use
to the benefits while ensuring mitigation of the risks. ESU does not believe that AI
should be banned, as that would be an unproductive and inefficient measure that
deters innovation; rather, the use of AI should be regulated by binding provisions.
Just as medicinal products cannot be brought to market without serious safety
checks, AI should not be deployed in education without robust monitoring and
quality assurance. Generative AI can inform policies, but it should not be able to
take any decision or conduct student assessment, and any institutional procedure
that uses AI should be transparently disclosed. Academic staff and students should
receive adequate training on how to use AI, its positive features and its risks, as
well as assistance in its use, in order to ensure inclusivity. The need for critical
thinking and for tackling disinformation only increases as AI becomes more
powerful and widely deployed. Policy-making should also take into consideration
the impact the full spectrum of AI has on (higher) education, even though the
spotlight has been on ChatGPT for its ability to answer the classically formulated
problems addressed to students in the context of academic evaluation.

Hundreds of tools are emerging, and their impact should be evaluated at every level;
mastery of them should be included in teaching programmes for every field, from
the first bachelor year to the PhD. Understanding the stakes around AI and how it
changes the methods of every field should be a key focus of teaching in the coming
decades. At the same time, it should become imperative to declare when an
AI-based tool has been used to produce content.
ESU calls for a human rights-based approach to the use of AI, analysing its impact
on human behaviour, society and fundamental rights, as well as on the interaction
between humans and machines, as a way to guide AI policies and regulations.
To this end, regulations on who can provide AI-based services, and where and when,
should be developed. ESU welcomes the initiatives of the Council of Europe in the
Committee on AI and the Committee of Education aiming at a roadmap towards a
human rights-embedded legal instrument on AI, and seeks to contribute to this
process.

Proposers: EC; FAGE, France; UDU, Italy
