
Responsible AI Use in Research
Policy & Best Practice Document for Researchers
The Responsible AI Use in Research Policy & Best Practice Document for Researchers provides Policy and Guidance on the responsible use of AI in research.
The best practice document is a living document and a working group on AI use in Research monitors changes in the sector to keep this up to date. The University has also published Guidance on the use of Generative Artificial Intelligence in PGR programmes as well as more general guidance on generative AI tools on the IT Services webpages.
The document is split into sections below to allow easier navigation, or can be downloaded as a PDF: Responsible AI Use in Research (PDF, 482kb).
This document provides Policy and Guidance aligned with the European Commission (EC) principles on the responsible use of AI in research and The Russell Group’s Principles on using Generative AI in Higher Education (HE). It has been approved by URC following extensive consultation with departments, coordinated by the AI in Research group led by Dr Jennifer Chubb, Department of Sociology and PVCR Matthias Ruth.
This document is organised into essential requirements (related to existing, non-negotiable policy) and recommended best practice to support responsible AI integration at each research stage, set in the context of our guiding values for responsible AI in research.
For advice and guidance regarding teaching & assessment, researchers should visit Staff Guidance on Generative AI.
This document is about the use of AI in research and does not apply to developing or building AI. For more information, see the University’s Code of Practice on Research Integrity.
Research students must also consult the Guidance on using generative AI in PGR programmes and abide by the Policy on Transparency of Authorship in PGR Programmes.
- This document provides Policy and Guidance
- Researchers are expected to use their own judgment in how they apply the best practice guidance and how it relates to their own research
- There is no strict boundary defining AI, but the most important issues arise from commonly used tools, whether commercial or open source, rather than those developed by research teams for specific research objectives. See the ‘What is AI?’ section below for further details.
- This document aims to help researchers thoughtfully consider the responsible use of AI in their work, including evaluating whether its use is necessary or appropriate.
- The document is divided into sections across the life cycle of research to highlight best practice. There may be overlap and repetition across life cycle stages. Researchers are encouraged to use their judgement and consult this document closely when using or when advising others about the use of AI in research.
What is AI?
AI is often used as an umbrella term for a range of algorithm-based technologies which mimic human cognitive abilities. Neither ‘Artificial Intelligence’ nor more recent terms such as ‘Generative AI’, ‘foundation models’ or ‘frontier models’ has a consistent scientific definition; these terms are used for different purposes by different organisations and companies.
From a user perspective, the following distinction may be useful for understanding different AI functionalities.
We consider AI as belonging to two categories:
While all AI systems make predictions, predictive AI focuses on forecasting outcomes, trained on historical data, whereas generative AI creates new content, such as images or text, based on input data.
Traditional Machine Learning or predictive AI leverages historical data to forecast future trends. It uses techniques like classification (categorising data based on past information), clustering (grouping similar data), and time series analysis (examining data over time). This type of AI is used for identifying patterns, making informed decisions, and anticipating future outcomes.
Generative AI is focused on creating new data or content by learning from existing patterns and responding to user prompts. This can include generating text, images, music, or even entire virtual environments, making it a potentially useful tool for creativity and innovation.
There are different ways you may come into contact with AI, depending on your role:
- Research: you may consider automating literature reviews, streamlining data collection, supporting ideation, or performing complex data analysis.
- AI as the subject of research: research that explores AI’s capabilities, limitations, ethical implications, and potential advancements to improve its applications.
- Daily administration: using AI for routine administrative tasks, such as scheduling, email management, and document organisation. This might include drafting emails or summarisation.
- University operations: you may find that professional services and operations begin to use AI in university processes.
Unless you are using a specific form of predictive AI or, for instance, using AI as a coding tool for data analysis or exploration tasks, most of the tools you are likely to encounter will be based on Large Language Models (LLMs). These tools mimic or replicate human linguistic interactions, taking natural language prompts as inputs and producing linguistic items such as conversations, summaries, reports, lesson plans and PowerPoint presentations as outputs. Some of these will be general purpose, drawing the content in their outputs from internet searches (e.g. Google Gemini), and others will be special purpose, drawing from restricted sources for Retrieval Augmented Generation (RAG) (e.g. ResearchRabbit). Google’s recently launched NotebookLM allows RAG on sources you upload.
Important takeaway:
The University has licensed versions of both Gemini and NotebookLM which do not use inputs for training future models. Even when using a University licensed tool, researchers are responsible for ensuring they are alert to how the application treats their data and any possible issues with data protection, copyright and consent.
Artificial Intelligence is providing new ways to conduct and manage research and, in some cases, is changing the nature of research itself. Although not exhaustive, a recent report described three general roles of AI in scientific research: a computational microscope, providing advanced simulations and data representation; a resource of human inspiration, identifying sources and areas of interest in the literature; or an agent of understanding, transferring scientific insights to a human expert or acquiring new scientific understanding (Krenn et al., 2022). Generative AI can offer very convincing illusions of human-like understanding; however, researchers need to stay alert to the considerations relating to the use of AI tools within research projects concerning quality, bias, reproducibility, information degradation, and ethics.
Our guidelines focus on Responsible AI practice, particularly with widely available 'off the shelf' tools, while remaining inclusive of numerical methods and adaptable to future advancements. The most important issues arise from commonly used tools, whether commercial or open source, rather than those developed by research teams for specific research objectives. This document is about the use of AI in research and does not apply to developing or building AI. For more information, see the University’s Code of Practice on Research Integrity.
As researchers, it is our responsibility to stay informed about debates concerning the responsible use of AI in research to ensure appropriate and ethical use of AI in line with University policy and practice. You are encouraged to incorporate a reflective approach, moving beyond simple checks and tick-boxing to critical reflection.
There will be times when the use of AI can support researchers who wish to innovate with ideas generation and summarisation. However, before using AI, researchers are encouraged to reflect critically on the value added - AI makes mistakes and false claims, is uncritical, and produces generic content. AI bias and ‘hallucinations’ - generating completely false or misleading information - are both issues that can arise. AI can reinforce and amplify existing inequalities and biases, especially when trained on historical data that reflects societal disparities. In research, this can manifest in biased citation recommendations, unequal access to funding, and underrepresentation of marginalised groups in scholarly publishing, perpetuating systemic disadvantages.
Generative AI tools usually have guard rails to prevent harmful or dangerous content from being generated, which introduces an additional bias that researchers should be mindful of. In addition, researchers should be aware of degradation, which occurs when AI-generated content draws on a body of sources that themselves contain AI-generated content; this will happen with increasing frequency as public content includes more AI-generated material.
We want our researchers to feel supported and encouraged in their work. While researchers may be tempted to use AI to ‘speed up’ the academic research process (Chubb et al., 2022), researchers are encouraged to keep the principles of a healthy research culture in mind and use these to guide their activities. In some instances, the use of AI will add a further level of complexity and additional opacity to the process of institutional ethical review. Currently, the use or development of AI tools does not, in itself, require researchers to carry out ethical review or a Data Protection Impact Assessment (DPIA). Researchers should speak to their Local Research Ethics Chair if they are uncertain about AI in ethics procedures.
All work, representing the results of research, must comply with the University of York Code of Practice on Research Integrity.
Summary
- Use AI in a manner that aligns with ethical standards and university policies.
- Move beyond simple compliance and engage in deeper, critical reflection on AI's impact.
- Be alert to potential issues with AI, such as mistakes, false claims, biases, and the risk of reinforcing societal disparities.
- Recognise AI's potential to support innovation in research while also cautioning against over-reliance.
- Consider how AI can perpetuate existing biases and inequalities, particularly in research contexts.
- Consider the degradation of content quality when AI-generated content is based on other AI-generated content.
- Work towards contributing to a healthy research culture.
- Adhere to institutional guidelines, such as the University of York Code of Practice on Research Integrity, and seek guidance when needed.
This document aligns with the information classification scheme: https://www.york.ac.uk/staff/policies/information-policy/info-policy-and-you/classification/
Special category data: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/lawful-basis/a-guide-to-lawful-basis/special-category-data/
The AI in research document is underpinned by a set of guiding values.
1. Accountability and responsibility
Researchers hold responsibility for their use of AI in research, ensuring AI tools do not substitute human critical judgement.
2. Honesty, transparency and attribution
Researchers must document which AI tools are used, when they are used, and their influence on the research and dissemination process, ensuring compliance with academic and research integrity. Gemini (and ChatGPT and others) will record all your prompts and conversations for a period. To create a permanent record, researchers may wish to use this example AI Prompt Record.
Researchers must check with funders’ and publishers’ policies about attribution. For instance, according to UKRI, researchers do not need to cite AI use in developing grant proposals, which do not have named authors in the same sense as publications.
If researchers do use AI in their research, it is good practice, to ensure transparency, to record the prompts used to arrive at the output generated by AI, in line with open research practices. Crucially, researchers must use great care when using AI tools to profile or make automated decisions about individuals, especially where human review is not intended.
3. Privacy and confidentiality
Researchers must protect the privacy of their participants' data. If planning on using AI to process personal data, researchers must clearly explain to participants in information sheets and consent forms how their data will be used, including which tools. Researchers must also establish clear accountability lines for data handling and AI usage across the research team.
Confidential data is data where access is restricted due to legal, ethical or contractual requirements. Research which includes confidential data must not be shared with AI systems without assurances from IT services regarding data protection, intellectual property and information security. Appropriate contracts must exist between the AI supplier and the University.
Researchers must also treat open source models as AI suppliers in the context of research data management. The implications of AI use need to be considered more broadly than legality and formal compliance, and researchers are expected to have an awareness of this when deciding which tools to use.
4. Data protection
Researchers must protect two kinds of data - the data from/about others used to conduct the research, and the data/information generated by the research itself. Both must be properly protected. Researchers are reminded of the need to align their use of AI to data protection principles i.e., that data must be:
(1) processed fairly, lawfully and transparently;
(2) processed for specified, explicit and legitimate purposes;
(3) adequate, relevant and limited to what is necessary;
(4) accurate and, where necessary, kept up-to-date;
(5) retained for no longer than necessary; and
(6) kept secure.
Use of AI may require additional GDPR/DPIA considerations. As with all aspects of research, researchers must implement robust security measures, including the use of encryption and secure storage, and follow the guidelines and policies outlined in this document before uploading research data into any AI tools.
Whilst alignment with Data Protection principles is essential, privacy is a different concept: privacy concerns can arise even within practices that are fully aligned with data protection principles.
5. Compliance
Researchers must adhere to funders’ and institutional policies on responsible AI use, particularly regarding intellectual property (IP) rights, data protection, and research ethics. Researchers must use IT approved solutions, and seek a recommendation from IT services when exploring the use of tools outside the list of IT approved tools for university research.
6. Research Ethics
Where activities fall within the scope of the University’s ethical framework, as described in Code of Practice and Principles for Good Ethical Governance, the activities should be formally considered and signed off through the University’s governance structures. When using AI, particular care must be taken when collecting, handling and storing sensitive, classified and/or personal data. Researchers must consult the relevant ethics committee as needed when using AI in sensitive areas of research, such as profiling or automated decision-making. The University is reviewing how to consider inclusion of AI in the ethics process.
7. Sustainability
The training of LLMs requires considerable energy to carry out the computational and data management tasks. The environmental impact of AI is everyone's responsibility: researchers using AI must minimise resource consumption, remain mindful of the environmental cost of AI use, and work to minimise the environmental footprint of their research activities in line with the University’s commitment to sustainability.
8. AI Literacy
Researchers must stay informed on best practices for AI use and share knowledge with colleagues. Staff are encouraged to engage with university training led by IT on approved tools before using external AI solutions. A workshop introducing researchers to the use of generative AI in research is part of the annual York Researcher Development Programme, and is open to both postgraduate researchers and staff. The Library offers regular training and drop-in sessions, and the University is in the process of streamlining its training and development offering.
How to use this document: This document is intended to support decision-making about AI use in research. Researchers should note that while different stages of the research process require attention to different issues, there are also aspects of AI use that are common to all stages. This is addressed in the Q and A section.
This document includes a set of guidance, or guard rails for each research process stage, and a Q&A section. The document is the result of consultation with key members of academic and professional staff who form the AI in Research Task and Finish Working Group. This is a living document and will be regularly updated and considered by the AI in Research Working Group (Academic Lead Dr Jenn Chubb, Department of Sociology).
The elements included in this document are aimed at raising awareness of best practice in the responsible use of AI in research. Where elements of the guidance relate to existing University policy, the points below are mandatory.
1. Ideation and planning
- Researchers should remain active and critical about the role of AI in generating or setting hypotheses, research questions, aims and goals, ensuring AI complements, rather than replaces, human decision-making.
- Researchers must not use AI in the development of ethics applications as this could indicate that researchers have not fully engaged with the ethical implications of their own research.
- Researchers should critically reflect on and fact-check AI output to ensure that the AI tools they use do not perpetuate or amplify existing biases, which could lead to unfair hypotheses or ideas generation.
- Researchers remain responsible for the hypotheses and research questions generated by AI. They should critically evaluate and validate these hypotheses, ensuring they are scientifically sound and ethically justified.
- AI might reproduce content which should be attributed to a person. Researchers should therefore not assume that content originates with the AI, and should actively check for correct attribution.
- For idea generation, use of AI should be limited to preferred tools approved by IT services.
- Document and disclose the specific AI tools and data sources used, noting the necessity to validate AI-generated hypotheses and provide proper attribution. For instance, researchers should record prompts and/or export prompts from the tools themselves. To do this, researchers could use this AI Prompt Record sheet template as an example of how and what to record. Researchers can also export memory from certain tools but this would be their own permanent record.
2. Data collection
- In applications for ethical approval by a research ethics committee (internal or external), use of AI must be explicitly described.
- Researchers must make sure they protect the privacy of any participants' data. Before using AI, make sure all personally identifiable information is removed or pseudonymised.
- If planning on using AI to collect or analyse data, clearly explain to participants how their data will be used, including which tools will be used and which third parties their data will be shared with.
- Researchers must also establish clear accountability lines for data handling and AI usage across the research team.
- Researchers must not share personal identifying data with AI tools.
- Research, especially unpublished work and third-party content, must not be shared with AI systems without assurances regarding data protection, copyright, intellectual property and without anonymisation.
- Researchers in doubt about the use of AI in research should seek guidance from the relevant Local Research Ethics Committee Chair or the University Academic Ethics and Compliance Committee; this includes reporting where AI has compromised or harmed the research, research participants or research data in some way.
- AI tools must comply with the University’s Information Classification Policy and data protection guidelines, as outlined in Value 4 of this policy.
- Researchers must comply with the Research Data Management Policy.
- Following the Research Data Management Policy and the Code of Practice and Principles for Good Ethical Governance, when processing personal data researchers must adhere strictly to the principles of informed consent, ensure potential participants have the information (in relation to AI use) they need to make that decision, store data securely, and use only what is necessary.
- Researchers must secure clear consent for data sharing, adhering to data protection principles (e.g. data minimisation, explicit purpose, and lawful processing).
- Researchers should use IT approved AI tools, or seek approval from IT for tools not on the approved list.
- Data should be managed in a way that prevents accidental sharing or sharing more data with the tool than intended as per the Research Data Management Policy and the Code of Practice and Principles for Good Ethical Governance.
- Researchers are encouraged to engage with relevant training both internal and external to the university.
3. Data analysis and interpretation
- Transparency is essential in data analysis and interpretation. Researchers should document how AI has been used in the analysis of their research. To do this, it is best practice to keep track of the prompts given to AI tools. For instance, researchers could use the AI Prompt Record template or export memory from the tools themselves.
- Researchers should use specific, IT approved AI transcription tools to ensure data protection. They should also check for accuracy and comply with the University’s data protection and privacy policies.
- Research, especially unpublished work and third-party content, should not be shared with AI systems without assurances regarding data protection, intellectual property, and without anonymisation.
- Researchers must not provide third parties’ personal data to AI systems following the Code of Practice and Principles for Good Ethical Governance.
- Where generative AI aids in data analysis, researchers should document the prompts used in identifying patterns or themes. This includes being transparent about AI’s decision-making process wherever possible (common AI uses in data analysis may include tasks such as coding linguistic data, conducting qualitative content analysis, or using synthetic data for simulations). Each of these requires clear documentation and, if necessary, ethics committee review to confirm compliance. Specific use cases (e.g., orthographic transcription of speech or simulated voices in linguistics research) may require distinct ethical considerations and transparency in reporting. Researchers might like to use the template provided on how and what to record, or export memory from the tools themselves; see the AI Prompt Record.
- Researchers should be aware that the use of AI may not be acceptable for certain research approaches or paradigms. For instance, with respect to interpretive research techniques, e.g. thematic analysis, it would not be considered appropriate to use an AI tool to develop themes. AI may not be suitable for all types of analysis or all disciplines.
4. Writing and dissemination
- Researchers must not use AI as the sole generator of new academic content. Depending on the details, using AI to write parts of academic papers may also be considered research misconduct.
- Research, especially unpublished work and third-party content, must not be shared with AI systems without assurances regarding data protection, intellectual property and without anonymisation.
- Researchers should not attribute authorship to AI systems. AI systems are not authors or co-authors, as authorship implies agency and, very importantly, responsibility, both of which lie with human researchers.
- Researchers should adhere to COPE principles: "being accountable for the work and its published form".
- Postgraduate researchers (PGRs) must follow the Guidance on using generative Artificial Intelligence in PGR programmes and abide by the Policy on Transparency of Authorship in PGR Programmes.
- Researchers may use AI for productivity tasks (e.g., adapting tone for journal submissions, drafting lay summaries). However, usage must follow institutional, publisher and funder guidelines on attribution, and AI should not replace essential skill-building activities, especially for students, PGRs and early career researchers.
- Researchers should be cautious when using AI to rewrite or improve drafts, taking clear note of the need to check the output of their work.
5. Peer review and evaluation
- AI should not be used in peer review, research evaluation (REF) or reviewing funding applications (as per UKRI guidelines).
- Generative AI tools should not influence evaluation such as peer review or committee work for funding bodies.
- All researchers should check with third parties, e.g. publishers, funders and committees, regarding their policies on the use of AI in research. For instance, uploading a grant proposal to an AI tool breaches the specific confidentiality requirements of most funders.
- Researchers should give consideration to the appropriateness of the University's Approved Generative AI tools before deciding to use them.
- Because of specific considerations relating to intellectual property (IP) and commercialisation, researchers should consult RIKE regarding IP and AI use.
6. Impact
- Researchers should exercise ethical judgement in using AI for impact assessments, ensuring AI-generated metrics do not override human assessment.
- AI tools used to measure and record research impact should be carefully selected from the pre-approved list of IT tools.
- In light of the University’s commitment to sustainability, when tracking citations or social media mentions through AI, researchers should critically evaluate the appropriateness of using AI over other tools, the AI's role and the reliability of its findings.
- Researchers should document any limitations of AI-driven metrics, and maintain a balanced approach to impact evaluation that includes human judgement to counteract AI’s potential for bias.
Approval body: University Research Committee
Policy Owner: Pro Vice Chancellor - Research
Responsible Service: Research, Innovation and Knowledge Exchange
Policy Manager: TBC
External regulatory and/or legal requirement addressed: N/A
Equality Impact Assessment: Not relevant for this policy
Approval date: TBC
Effective from: TBC
Date of next review: No later than one year after implementation
Questions and answers
A publisher has asked me to sign an agreement for using my publications in AI training licensing. I don’t know whether to sign or decline this. What should I do?
It's becoming increasingly common for publishers to seek agreements for using publications in AI training. However, signing such an agreement involves several important considerations. This kind of request is not just a request for commercial use of your work - as a reprint or translation would be - instead it is the creation of a derivative work over which you, as author, have no control.
Researchers should carefully consider their position before opting in to AI-related addendums proposed by publishers. While some publishers present such agreements as beneficial for esteem, visibility and long-term impact, the actual effects remain uncertain. You are encouraged to reflect on the following:
- Publishers have not always disclosed which AI providers they are working with or the terms of these agreements. Understanding where and how your work may be used is essential.
- While some publishers strive for fair attribution, the mechanisms for ensuring proper credit and compensation are not yet clear.
More publishers will likely enter similar agreements. Opting in now may set a precedent that affects the future value and control of research outputs. At this stage, researchers should make their own choice based on their priorities, and a careful assessment of the risks and benefits. Given the number of unknowns, a cautious approach is advisable until clearer frameworks emerge.
Should I log my prompts when using Generative AI to provide transparency?
Recording prompts is good practice. There are some fundamental difficulties inherent in prompting: getting the output you want can be a difficult and nonlinear process of writing the prompt, rewording it, asking the model to clarify certain aspects of a prompt, rewriting other sections, and so on. Researchers should be aware that using generative AI is more than just one prompt as input = one perfect output. This can cause some complications in terms of transparency, especially when logging prompts. Generative AI tools also apply prompt scripts (system prompts) which add material to every user prompt; these are not public and can change. Anthropic, for instance, publishes a version history of its system prompts.
Why is Gemini the University ‘preferred tool’?
Gemini is the University's preferred GenAI tool because:
- Any data inputted is fully protected: your chats are never used for training or human review by Google. You must make sure you are logged into your University Google account to ensure this.
- Google Workspace is the University’s primary collaboration and communication platform, so it makes sense to use a GenAI tool that is part of the same ecosystem.
- It offers a simple path to upgrade to full GenAI capabilities within Google as and when required.
I am a non-native speaker, can I use AI to help me read and write research reports?
There are some ways in which AI can help non-native speakers read and write research articles, from providing summaries and translations, through outlining and drafting text, to proof-reading and rewriting text. It can also be used to automatically rewrite texts in another genre, e.g. to produce a lay summary of an article. Researchers, however, need to remain mindful that automatically generated texts may contain inaccuracies, and translations may lack nuance as well as contain inaccuracies and fabricated material. When using AI tools in ways that go beyond spelling and grammar checking and might impact on meaning, researchers are reminded that they remain responsible for their writing and should check any AI output carefully. This requires active engagement with the outputs generated by AI tools.
If you are a PGR, you must abide by the rules set out in the Policy on Transparency of Authorship in PGR Programmes.
Note: GenAI should not be used where a research-related skill, such as lay communication or summarisation is being assessed as part of a qualification unless permission has been explicitly granted by the task-setter.
I am planning for REF and I would like to use AI to write elements of my submission - can I do this?
Researchers can refer to Section 5 on peer review and evaluation with regards to the use of AI in peer review and evaluation relating to the REF. Using GenAI tools to draft elements of REF submissions is not recommended.
Is there a way to log my AI prompts?
There is no policy on the need to log AI prompts, but in line with other policies on ethical practice, we consider it good practice to either export prompts from the tool you use or keep your own log of your interactions with AI. For instance, an AI Prompt Record has been developed which you may wish to use.
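For researchers who would rather keep such a log programmatically, the sketch below shows one possible approach: appending each interaction to a simple CSV file. It is purely illustrative and is not the University's AI Prompt Record template; the file name and column headings are assumptions you should adapt to your own project.

```python
# Illustrative sketch only: a simple personal prompt log kept as a CSV file.
# The file name and column headings are assumptions, not a University standard.
import csv
from datetime import date
from pathlib import Path

LOG_FILE = Path("ai_prompt_record.csv")  # hypothetical location; choose your own

def log_prompt(tool: str, version: str, prompt: str, purpose: str, output_use: str) -> None:
    """Append one AI interaction to the log, writing a header row on first use."""
    is_new = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["date", "tool", "version", "prompt", "purpose", "how the output was used"])
        writer.writerow([date.today().isoformat(), tool, version, prompt, purpose, output_use])

# Example use (hypothetical values):
# log_prompt("Gemini", "University-licensed", "Suggest alternative phrasings for this lay summary...",
#            "Drafting a lay summary", "Used as inspiration; final text rewritten and checked by me")
```

Keeping the log alongside the rest of your research data means it is covered by your normal data management and retention arrangements.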
How do I cite the use of AI tools in my work?
It is important to acknowledge how you have used AI in your work, which helps to explain the extent to which you have used generative AI tools and in what context. This also ensures that you are better able to avoid academic misconduct in your work. Please visit this link to find out how to cite AI.
I have just seen a new tool to help me with my research called Deep Research. Should I use it?
Inevitably, there will always be a new tool! As more tools emerge, such as OpenAI's Deep Research tool, we should ask ourselves: are we leveraging these tools to enhance our research capabilities, or are we allowing them to replace the cognitive work that defines what we do? Researchers should approach AI with a principled mindset, remaining alert to the erosion of critical research skills and valuable intellectual work. Faculty and supervisors have a role to play in guiding students and colleagues to use these tools with discernment, ensuring that they remain aids rather than replacements in the research process.
English is not my first language. Can I write up my research for publication in my first language and use AI to translate to English?
Researchers should exercise caution when considering the use of AI translation tools to write up their research. While such tools may provide an initial draft, they often lack precision and should be carefully reviewed for accuracy. From a pedagogical perspective, relying solely on AI for translation is not advisable. These tools can perpetuate the misconception that language functions merely as a mechanical coding of thoughts. In reality, language is intricately tied to the way we think and distinctions between languages should be considered and respected. Researchers should make clear the use of AI in this regard and comply with the policy of publication outlets.
If the publication will form part of your thesis for a PGR programme it is not acceptable to use AI translation tools (see the Policy on Transparency of Authorship in PGR Programmes).
What training is available?
See section on AI literacy for updates on training and development.
Am I permitted to use AI for preparing my journal article?
Researchers should refer to the writing and dissemination guidelines provided by publishers and funders, which largely permit AI for drafting and tone adjustment but prohibit substantive AI content creation and authorship. As an example, AI tools could help researchers convert a publication into a lay summary. However, being able to summarise research is a skill that we want all researchers to develop. AI might be used as an assistive tool to structure content, as an inspiration, or for summarisation, but the researcher bears responsibility for the end product. Researchers are reminded that generative AI can offer very convincing illusions of human-like understanding; they therefore need to check the reliability of AI-generated text.
How do I reference AI?
This will be dependent upon the output. For referencing styles visit https://subjectguides.york.ac.uk/referencing-style-guides/generative-ai
Where can I find approved and preferred software?
IT provides a list of preferred AI tools. For tools outside this list, consult with IT or your ethics committee, as needed, about whether it is appropriate to use them for the tasks you need.
Where can I see more examples of support on the subject of AI in research?
Digital Creativity: A practical Guide - Using AI generation tools
Guidance on the Use of Generative Artificial Intelligence in PGR programmes
Code of Practice and Principles for Good Ethical Governance
Code of Practice on Research Integrity
Research Misconduct Policy and Procedure
Copyright for Research: Library Practical Guide
What if my research requires responsible AI use outside these guidelines?
Researchers looking to use AI beyond these guidelines should submit a request to the relevant ethics committee or IT Services to ensure compliance.
I need to access my documents using text to speech for accessibility reasons. I think it may use AI. Can I use it?
The University has an approved text-to-speech solution. Follow the Information Classification Policy and visit https://subjectguides.york.ac.uk/learning-tech/read-write-software
How can AI interact with data?
Researchers should make themselves aware of how AI tools might be interacting with their data. For example, data could be uploaded directly to an AI tool as an attached file, shared via the chat function, or an AI co-pilot might be able to read and analyse a spreadsheet. How an AI tool uses or retains that data can also vary.
Who can I contact about this document?
The AI research working group is chaired by PVC Research. The academic lead is Dr Jennifer Chubb, Department of Sociology and Responsible AI Lead for SAINTS.
References
Chubb, J., Cowling, P., & Reed, D. (2022). Speeding up to keep up: exploring the use of AI in the research process. AI & Society, 37(4), 1439-1457.
Leslie, D. (2023). Does the sun rise for ChatGPT? Scientific discovery in the age of generative AI. AI and Ethics. https://doi.org/10.1007/s43681-023-00315-3
Krenn, M., Pollice, R., Guo, S. Y., Aldeghi, M., Cervera-Lierta, A., Friederich, P., ... & Aspuru-Guzik, A. (2022). On scientific understanding with artificial intelligence. Nature Reviews Physics, 4(12), 761-769.
Bockting, C. L., van Dis, E. A. M., van Rooij, R., Zuidema, W., & Bollen, J. (2023). Living guidelines for generative AI - why scientists should oversee its use. Nature, 622(7984), 693-696. https://doi.org/10.1038/d41586-023-03266-1
Exploring the Intersection of AI and HCI at CHI: Insights for Legal Research. https://medium.com/tr-labs-ml-engineering-blog/exploring-the-intersection-of-ai-and-hci-at-chi-insights-for-legal-research-07639ea5228e
Existing examples of Generative AI use in research guidelines
AI for Africa: Use Cases Delivering Impact
Generative AI Framework for HMG
Living Guidelines for Responsible Use of Generative AI in Research
AI for Researchers - University of Glasgow
Cancer Research UK Guidance for Researchers on the Use of Generative AI
AI Use in Research - University of Leeds
Responsible Use of Artificial Intelligence in the Research Process - Aalto University
Artificial Intelligence - Arizona State University
Generative AI Principles - ACM
Artificial Intelligence - TEQSA
Ethics Evaluation - Deakin University
Guidelines for Using ChatGPT and Other Generative AI Tools - Harvard
Generative AI Paper 2023 - STM
Policy for Acceptable Use of Large Language Models - ISCB
Guidelines for Use of Generative AI - Maastricht University
Research Values Framework - Science Europe
Guidance on Generative AI in Education and Research - UNESCO
Guidelines for Use of Generative AI - University of Ljubljana
Guidance on the Use of Generative Artificial Intelligence - University of Toronto
Contact us
Policy, Integrity and Performance
policy-integrity-performance
Dr Jenn Chubb
Academic Lead
jennifer.chubb
Department of Sociology