Guidance on the use of generative Artificial Intelligence in PGR programmes
Why does generative AI have these limitations?
Generative AI tools:
- use statistical prediction - they are not rational or critical, and they typically do not evaluate or fact-check the material they produce
- draw on the information they are trained on and have access to (typically the internet), and this information:
  - may be wrong, flawed, limited (eg excluding information that is non-digital, not on the internet, or behind an internet paywall) or lack currency (some generative AI platforms only have access to information up to a certain date)
  - is likely to have been created by other people and may be subject to copyright or other intellectual property rules and laws
- rely on algorithms which may bias the outputs in a non-transparent manner
- may not provide information on the sources used
- are subject to user error.
The risks of using generative AI
As a PGR, you are responsible for maintaining a high standard of academic and research integrity and for adhering to the University's principles for the use of generative AI in PGR programmes. If you use, or misuse, generative AI in your PGR programme, you may be committing academic or research misconduct:
- Plagiarism: for example, if you use a generative AI output (eg text, image, idea) without sufficient acknowledgement
- False authorship: for example, if you use generative AI to produce or adapt material (eg writing, code, images, data) that you present as your own
- Cheating: for example, if you use generative AI to provide support in an oral examination
- Fabrication or falsification: for example, if you use generative AI to create, manipulate or select material (eg data, images, consents, references) for your research
- Misrepresentation: for example, if you present research produced by generative AI as your own.
- Breach of duty of care: for example, if, through your use of generative AI, you breach data protection rules or share sensitive or confidential data.
Guidance on AI usage scenarios
Using generative AI wisely
Follow these steps to ensure that you're using generative AI wisely and adhering to the principles for the use of generative AI in PGR programmes:
Become generative AI literate and know what the University offers in terms of generative AI
Read the guidance on which GenAI tools are recommended at the University, and the guidance and policy for their responsible use.
Google Gemini is the University’s preferred GenAI tool because any data inputted is fully protected and chats are never used for training or human review by Google. You must sign in to Gemini using your University credentials to ensure this.
Involve your supervisor(s) and TAP in your work and any possible use of generative AI
Share and discuss work in progress with your supervisor(s) and TAP.
Discuss any planned use of generative AI with your supervisor(s) and TAP, gain their approval where necessary, and record the outcomes of your discussions on SkillsForge.
Be aware that generative AI may be integrated into standard software packages
Increasingly, generative AI is being integrated into standard software packages (eg Copilot in Microsoft products). You need to be aware of this so that you do not find yourself using generative AI when you did not intend to. To avoid this, consider turning off any integrated generative AI features in the software you use on your personal computer, and be particularly vigilant when using a non-personal (eg University) computer, where integrated generative AI features may be activated by default.
Read and understand the University’s policies on responsible research, and academic and research misconduct
Undertake the Research Integrity Tutorial (this is mandatory and should be completed before your first TAP meeting).
Read and make sure you understand the relevant University policies:
- PGR Academic Misconduct Policy
- Policy on Transparency of Authorship in PGR Programmes (including generative AI, proofreading and translation)
- Responsible research and research misconduct
- Academic Misconduct Policy (this only applies if you are undertaking taught modules)
Consider undertaking a Research Culture and Researcher Development course on research integrity. Discuss any queries you have with your supervisor(s) and TAP.
Be cautious and critical about using generative AI in your research
- Ask yourself whether using generative AI in your research is the right approach or whether an alternative might be more transparent or more in keeping with your development as a researcher.
- Ask yourself if your use of generative AI is replacing your own effort or simply replacing another technological tool - be particularly wary of the former.
- Check whether your funding body (if applicable) has a policy on generative AI use.
See the guidance on AI usage scenarios for more information.
Avoid generative AI in the production/delivery of formally assessed work
It is recommended that you avoid using generative AI - in any way - in the production/delivery of formally assessed work, including, but not limited to, reports for formal reviews of progress, the thesis (or alternative assessment format) and the oral examination, and also for formative tasks (eg drafts of work or written updates for your supervisor, TAP reports, ethical approval forms) that may feed into formally assessed work.
This is because the boundary between acceptable and unacceptable practice may not always be clear, and it may be tempting and easy to move from acceptable to unacceptable practice.
Obtain ethical approval
If you and/or your supervisor(s) think there may be any ethical implications to your use of generative AI you must apply for ethical approval as soon as possible. If in any doubt, contact your ethics committee and ask for advice.
Practise good data management
You are required to have a data management plan: use this to help you consider any risks associated with generative AI use. If you upload private, sensitive, confidential or embargoed data to a generative AI tool you are likely to breach privacy or intellectual property rules, or break a contractual or licensing agreement.
Keep records and acknowledge/reference your generative AI use
Keep good records of your use of generative AI.
Always acknowledge your use of generative AI and correctly cite it in any work you submit for your programme - see the advice below on correct referencing.
Evaluate generative AI outputs
Evaluate the reliability, accuracy, currency and impartiality of any generative AI output. Check for any infringement of intellectual property rights (including copyright).
Keep records of your work in progress
Keep good (full and, where applicable, time-stamped) records of your work in progress (including notes, calculations, data generated/collected, and report/thesis drafts) and, where relevant, save different copies of your work in progress rather than overwriting the same file. You need to be prepared to explain how your work was produced and how your thinking has evolved over time, and to provide evidence to back this up.
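A very simple way to keep versioned copies is to save each draft under a time-stamped name rather than overwriting a single file. The short Python sketch below illustrates one possible approach; the file and folder names are hypothetical examples, and dedicated tools (eg version control or your usual University backup arrangements) may suit you better.

```python
# Illustrative sketch only: saves a time-stamped copy of a draft instead of overwriting it.
# The folder and file names are examples, not a University requirement.
import shutil
from datetime import datetime
from pathlib import Path


def save_timestamped_copy(draft: Path, archive_dir: Path = Path("drafts_archive")) -> Path:
    """Copy the current draft into an archive folder with a time stamp in its name."""
    archive_dir.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y-%m-%d_%H%M%S")
    copy_path = archive_dir / f"{draft.stem}_{stamp}{draft.suffix}"
    shutil.copy2(draft, copy_path)  # copy2 also preserves the file's modification time
    return copy_path


# Example usage (hypothetical file name):
# save_timestamped_copy(Path("thesis_chapter2_draft.docx"))
```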
Stay informed
Generative AI is a fast-moving area. Keep up to date with developments in the technology and guidance on how to use it. Engage with your fellow PGRs, members of staff, and experts in the field to share knowledge and good practice.
Think ahead
If you are planning to publish your research, identify the likely publishers and find out what their policies are on the use of generative AI, so that you are not setting yourself up for problems in the future.
If your research is on generative AI
If your research is focused directly on the use of generative AI (for example, exploring biases in generative AI tools, or looking at the impact of generative AI tools on the propensity of students to commit academic misconduct), you may need some exemptions from the standard restrictions on the use of generative AI set out in this document in order to conduct your research. You must discuss any such scenarios with your supervisor and TAP as soon as possible and, where necessary, obtain ethical approval.
Referencing the use of generative AI
You must correctly reference your use of generative AI.
Setting up a prompt/response repository
It is recommended that you set up your own prompt/response repository to support accurate record keeping and ensure that you reference generative AI use correctly. This could be a spreadsheet or other document where you record the information you need for referencing alongside the response you receive.
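If you prefer to automate your repository rather than maintain a spreadsheet by hand, the short Python sketch below illustrates one possible approach: it appends each prompt/response record, together with the details typically needed for referencing, to a CSV file. The file name, column headings and example content are hypothetical; a spreadsheet maintained by hand works equally well.

```python
# Illustrative sketch only: appends one generative AI interaction to a CSV log.
# The file name and column headings are examples, not University requirements.
import csv
from datetime import date
from pathlib import Path

LOG_FILE = Path("genai_prompt_log.csv")  # hypothetical file name
COLUMNS = ["date", "platform", "prompt", "response", "nature_of_use"]


def log_interaction(platform: str, prompt: str, response: str, nature_of_use: str) -> None:
    """Append one prompt/response record with the details needed for referencing."""
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(COLUMNS)  # write the header row the first time only
        writer.writerow([date.today().isoformat(), platform, prompt, response, nature_of_use])


# Example usage (hypothetical content):
log_interaction(
    platform="Google Gemini",
    prompt="Summarise the main arguments in my literature review draft.",
    response="<paste the full response here>",
    nature_of_use="Exploring structure for the literature review; output not used in submission.",
)
```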
Referencing generative AI: how to reference generative AI outputs
If you include or refer to any generative AI output in your work, you must reference it correctly and not present the generative AI output as your own work (eg text/image) or thought. Failure to reference generative AI outputs correctly is academic misconduct.
Current consensus is to treat a generative AI output as if it were private correspondence. This is because, like private correspondence, a generative AI output cannot be easily replicated and verified, and each prompt and response session is unique to you at that moment in time.
It is expected that references to generative AI outputs will include: (i) the name of the generative AI platform, (ii) the date of use, (iii) the person who input the prompt.
You should keep records of prompts used and the responses received from generative AI, even if you do not use these in your submission.
Advice on how to cite private correspondence within the various referencing styles is available from the Library.
eg for Harvard referencing, the in-text citation might look like this: (OpenAI ChatGPT, 2024), and the corresponding reference might look like this: OpenAI ChatGPT (2024). ChatGPT response to YOUR NAME, 1 January 2024.
Referencing generative AI: how to reference the use of generative AI as a tool
If you use generative AI in your work, you must, in most cases, acknowledge this: failure to do so is academic and/or research misconduct (depending on the type of use).
It is expected that references to generative AI use will include: (i) the name of the generative AI platform, (ii) the date of use, (iii) the nature of the use.
You should keep records of your use of generative AI.
Referencing generative AI: how to reference the use of generative AI as a proofreading tool
This is covered in the Policy on Transparency of Authorship in PGR Programmes (including generative AI, proofreading and translation).
Referencing generative AI: how to reference other forms of generative AI use
It is recommended that you use the ‘database’ format, with the addition of information about the specifics of the use.
For example:
OpenAI (2024). ChatGPT. Version X. [Online]. Available at: URL [Accessed 9 April 2024]. To translate the Welsh poem Rhyfel (out of copyright) into English.
Glossary
Explanation of terms
Artificial intelligence (AI): machines that perform tasks normally performed by human intelligence, especially when the machines learn from data how to do those tasks (JISC).
Generative AI: an artificial intelligence (AI) technology that automatically generates content [text, images, video, music, code etc.] in response to prompts written in natural-language conversational interfaces (UNESCO).
Generative AI tools include (this list is not exclusive and is constantly growing): ChatGPT, Microsoft Copilot, Google Gemini, Anthropic’s Claude, Meta’s Llama, Midjourney, DALL-E, RunwayML, Pika, AIVA.
Large Language Model (LLM): a form of generative AI that is tailored for text-based tasks.
Responsible research: research that meets the University’s research integrity principles of honesty, rigour, transparency and open communication, care and respect, and accountability.
Further reading
Discover how we're embracing AI at an institutional level: ethically, innovatively, and safely. Learn about our position on the use of AI in education and how we’re shaping its future in academia.
Learn more about how AI tools can be used in our teaching, learning and research.
Acknowledgements
This website has drawn from guidance issued by the Department of Computer Science at the University of York. It has also been influenced by guidance issued by other universities, with a particular mention to that issued by the University of Manchester and the University of Glasgow.