Use of generative Artificial Intelligence in PGR programmes

ChatGPT was released in November 2022 and has proved to be a watershed moment in the development of artificial intelligence (AI).

Machine learning and other forms of AI have been used by the general public for many years, albeit in less far-reaching or explicit forms (e.g. predictive text, spelling/grammar checkers, social media algorithms), and of course many PGRs and staff rely on AI for their research. ChatGPT and similar tools have, however, heralded a new era of generative AI, and it is this that we need to address.

What do we mean by generative AI?

For the purpose of this guidance, generative artificial intelligence (AI) is defined as the latest generation of machine learning models, notably large language models (LLMs), which are capable of natural language processing and which can generate multi-modal content in response to human prompts.

Generative AI differs from other forms of AI in its ability to allow users to create new content (rather than manipulate or make predictions from existing content) and to do so easily, quickly and without specialist skills. Generative AI tools include ChatGPT, GPT-3/4, DALL-E, Midjourney, GrammarlyGO (not the standard Grammarly), Notion AI, the new Microsoft Bing and Google Bard (this list is not exhaustive).

What are the opportunities and risks posed by generative AI?

For universities, generative AI presents huge opportunities with respect to learning and research but also potential harms, including unintended consequences of which we are, as yet, unaware. 

Safeguards need to be in place to ensure that generative AI is used appropriately and transparently, with due regard to:

  • For research - the University’s principles of honesty, rigour, transparency and open communication, care and respect, and accountability;
  • For assessment - the University’s principles of equity, openness, clarity and consistency;
  • For all - regulatory aspects such as data protection and copyright. 

We need you, as PGRs, to be aware that you are responsible for maintaining a high standard of academic and research integrity, and that if you use generative AI in your PGR programme you risk committing academic or research misconduct.

For example:

  • plagiarism - e.g. if, by using generative AI, you draw on a source that you do not reference;
  • commissioning and incorporation - e.g. if you incorporate generative-AI-produced (third-party) material as your own;
  • cheating - e.g. if you use generative AI as support in an oral examination;
  • fabrication or falsification - e.g. if you use generative AI to create, manipulate or select data/imagery/consents/references for your research;
  • misrepresentation - e.g. if you present generative-AI-produced research as your own.

You may find it helpful to refer to the PGR Academic Misconduct Policy.

Interim position statements on generative AI use in PGR programmes

Whilst we await national guidance and a developing consensus across the sector (research councils and other funders, regulatory bodies, universities, and academic publishers) on this issue, we have produced interim statements on the use of generative AI in PGR programmes. This is a fast-moving area and we will revisit these statements on a regular basis.

Use of generative AI in your research

Note that this statement refers to generative AI as defined above, NOT AI more broadly, e.g. other forms of machine learning.

If you are already conducting your research on or with generative AI, or you have an agreed plan to do so, then you should continue with your research, bearing in mind the guidance below, provided that: you have been through the usual checks and balances in gaining approval for your research topic and methodology, including discussing your methods in detail with your supervisor(s) and TAP (these discussions should be recorded on SkillsForge); and, where appropriate, you have obtained ethical approval.

If you are not already conducting research on or with generative AI, then we recommend that you await the guidance to be issued by University Research Committee.

Use of generative AI in the formal assessments of your PGR programme

You should not use generative AI to aid the production or delivery of any formal assessment for your PGR programme (including, but not limited to, reports for formal reviews of progress, the thesis (or equivalent) and the oral examination), other than use permitted under the University's Guidance on Proofreading and Editing.

Use of generative AI in taught components of your PGR programme

If you are undertaking any taught components (such as modules) as part of your PGR programme, you must refer to the University’s information on the use of generative AI in taught programmes.

How to avoid academic and research misconduct

  1. Avoid or be extremely cautious about using generative AI in your research. Keep careful records of how you have used generative AI in your research and make sure that this use is explicitly declared. Ensure your supervisor and TAP are aware of your use of generative AI and record those discussions in SkillsForge.

  2. Do not use generative AI in the production/delivery of formally assessed work for your PGR programme (including, but not limited to, reports for formal reviews of progress, the thesis (or equivalent) and the oral examination) OR for formative tasks (e.g. drafts of work or written updates for your supervisor, or TAP reports), whether or not they feed into formally assessed work.

  3. Keep records of your work in progress: keep good (and, where applicable, time-stamped) records of your work in progress (including notes, calculations, data generated/collected, and report/thesis drafts) and, where relevant, save successive copies of your work rather than overwriting the same file. A misconduct panel may ask for copies of your work in progress.

  4. Be ready to explain your work and how you produced it: If there is a suspicion of misconduct through generative AI use, you may need to attend a panel hearing on the case and explain how you produced your work.

  5. If you use AI tools to correct your own spelling and grammar, you must follow the Guidance on Proofreading and Editing. Many spelling and grammar checkers, such as Grammarly and the tools integrated into Microsoft Word, use AI, but generative AI tools go beyond what is permitted under the proofreading policy. Be extremely cautious about using generative AI to search for sources.

  6. If in doubt about what is or is not acceptable, seek advice as soon as possible from your supervisor(s) and TAP. They may wish to consult with your Graduate Chair, who in turn may wish to consult with the Standing Committee on Assessment and the Dean of YGRS. Discussions and decisions should be recorded on SkillsForge.

We will assume that, by submitting a piece of work for consideration by your supervisor/TAP and/or for formal assessment, you are representing that work as your own and not the product of generative AI use (or, for that matter, as created or modified by any other person) unless you provide a written declaration explaining how you have used generative AI. 

We are working with departments to help them detect generative AI use. If staff identify a piece of work that is likely to have been produced by generative AI (without falling into an exempt category), it may be reported as academic or research misconduct (as appropriate). We may use software to help us detect generative AI usage.

We hope that you share our aim of ensuring that all PGRs are treated fairly, and we welcome your help in encouraging each other to adopt appropriate practices. If you have any concerns about a colleague’s use of generative AI, please speak to your Graduate Chair, who will provide advice.