Generative Artificial Intelligence presents fundamental opportunities and challenges to society. We are embracing this change by incorporating generative AI into our approach to teaching, learning and assessment.

As a University for Public Good, we must consider the significant ethical, environmental and practical challenges of incorporating this emerging and rapidly evolving technology. It is essential that we maintain the rigour and integrity of our awards and the vital role of the university in society.

We look forward to working with students, staff, the sector, schools and employers to embed generative AI ethically and inclusively. We are taking a progressive, sustainable and evidence-based approach to prepare our students for the future workplace and society.

The University of York contributed to and has adopted the Russell Group principles on the use of generative AI tools in education.

Introduction to Generative AI

Artificial Intelligence may be defined in a number of ways; for our purposes, we will use the simple definition adopted by the UK government and Jisc:

“Machines that perform tasks normally performed by human intelligence, especially when the machines learn from data how to do those tasks.”

Generative AI is an advanced form of machine learning defined by UNESCO as “an artificial intelligence (AI) technology that automatically generates content in response to prompts written in natural-language conversational interfaces” (2023, p.8).

Generative AI programs and tools, for example ChatGPT, are trained on large datasets, such as a snapshot of the internet or, in some cases, the live internet. They then respond to human prompts by creating new content with features similar to the training data.

As generative AI develops, it can be prompted to produce numerous kinds of output, such as text, images, video, music, speech, computer code and design. This experimental technology is in a state of rapid development and therefore has significant limitations and biases.

Generative AI use in education

The rapid development of generative AI presents policy challenges. We are remaining flexible, regularly reviewing our teaching, learning and assessment policies so that we can respond to significant generative AI developments while maintaining our core principles.

In education, generative AI can be used for a huge variety of tasks, such as text generation, language translation, question answering and text summarisation. It can also be used to create personalised content and improve search results.

Additionally, these tools can be fine-tuned for specific use cases, such as computer coding and other applications. There are significant ethical and sustainability considerations in adopting this technology, which can have advantages and disadvantages for teaching, learning and assessment, depending on how it is used. There are concerns about academic integrity and data protection, but also potential benefits for accessibility and data analysis.

Overview of assessment policy on generative AI

We have considered how generative AI relates to assessments and have produced the Policy on Acceptable Assistance with Assessments (including Generative AI, Proofreading and Translation), available on our Guide to Assessment webpage. Students can use this in conjunction with our Student guidance on using AI and translation tools.

While we do encourage a progressive approach to generative AI, we acknowledge there are assessments where the use of generative AI may undermine academic integrity.

To address this, we have amended the Academic Misconduct policy to include a new offence:

False Authorship is the production or adaptation of academic work (for example writing, computer code, images, data), in whole or in part, for academic credit, progression and award, whether or not a payment or other favour is involved, using unapproved, undeclared or falsely declared human assistance (eg family members, friends, essay mills or other students not taking the same assessment) or technological assistance (eg generative AI or software).