When UKRI released its policy on using generative artificial intelligence (A.I.) in funding applications this September, I found myself nodding until I wasn’t. Like many in the research community, I’ve been watching the integration of A.I. tools into academic work with excitement and trepidation. UKRI’s approach, however, is a puzzling mix of Byzantine architecture and modern chic.
The modern chic, the half they got right, concerns the use of A.I. in research proposal development. By adopting what amounts to a “don’t ask, don’t tell” policy, UKRI has side-stepped the endless debates that swirl around university circles. Do you want to use an A.I. to help structure your proposal? Go ahead. Do you prefer to use it for brainstorming or polishing your prose? That’s fine, too. Maybe you like to write your proposal on blank sheets of paper with an HB pencil. You’re a responsible adult—we’ll trust you, and please don’t tell us about it.
The approach is sensible. It recognises A.I. as just one of the many tools in the researcher’s arsenal, no different in principle from grammar-checkers or reference managers. By not requiring disclosure, UKRI has avoided creating artificial distinctions between A.I.-assisted work and “human work”. Such a distinction also becomes increasingly meaningless as A.I. tools integrate into our daily workflows, often without our knowledge.
Now let’s turn to the Byzantine—the half UKRI got wrong—the part dealing with assessors of grant proposals. And here, UKRI seems to have lost its nerve. The complete prohibition on using A.I. by assessors feels like a policy from a different era—some time “Before ChatGPT” (B.C.) was released in November 2022. The B.C. policy fails to recognise the enormous potential of A.I. to support and improve human assessors’ judgment.
Imagine you’re a senior researcher who’s agreed to review for UKRI. Perhaps you have just submitted a proposal of your own, having used A.I. to polish and improve it. As an assessor, you are now juggling multiple complex proposals, each crossing traditional disciplinary boundaries (which is increasingly regarded as a positive). You’re probably doing this alongside your day job, because that’s how senior researchers work. Wouldn’t it be helpful to have an A.I. assistant to organise key points, flag potential concerns, help clarify technical concepts outside your immediate expertise, act as a sounding board, or provide an intelligent search of the text?
The current policy says no. Assessors must perform every aspect of the review manually, potentially reducing the time they can spend on a deep evaluation of the proposal. The restriction becomes particularly problematic when considering international reviewers, especially those from the Global South. Many brilliant researchers who could offer valuable perspectives might struggle with English as a second language and miss some nuance without support. A.I. could help bridge this gap, but the policy forbids it.
This two-track policy leads to an ironic situation. Applicants can use A.I. to write their proposals, but assessors can’t use it to support the evaluation of those proposals. It is like allowing Formula 1 teams to use bleeding-edge technology to design their racing cars while insisting that race officials use an hourglass and the naked eye to monitor the race.
Strategically, the situation worries me. Research funding is a global enterprise; other funding bodies are unlikely to maintain such a conservative stance for long. As other funders integrate A.I. into their assessment processes, they will develop best-practice approaches and more efficient workflows. UKRI will fall behind. This could affect the quality of assessments and UKRI’s ability to attract busy reviewers. Why would a busy senior researcher review for UKRI when other funders value their reviewers’ time and encourage efficiency and quality?
There is a path forward. UKRI could maintain its thoughtful approach to applicants while developing more nuanced guidelines for assessors. One approach would be a policy that clearly outlines appropriate A.I. use cases at different stages of assessment, from initial review to technical clarification to quality control. By adding transparency requirements, proper training, and regular policy reviews, UKRI could lead the way with approaches that both protect research integrity and embrace innovation.
If UKRI is nervous, it could start with a pilot programme, evaluating the impact of A.I.-assisted assessment against the traditional approach. This would provide evidence-based insights for policy development while demonstrating leadership in research governance and funding.
The current policy feels half-baked. UKRI has shown it can craft sophisticated policy around A.I. use; the approach to applicants proves this. It needs to extend that same thoughtfulness to the assessment process. The goal is not to use A.I. to replace human judgment but to enhance it, allowing assessors to focus their expertise where it matters most.
This is about more than efficiency and keeping up with technology. It’s about creating the best possible system for identifying and supporting excellent research. If A.I. is a tool that supports this process, we should welcome it. When we help assessors do their job more effectively, we help the entire research community.
The research landscape is changing rapidly. UKRI has taken an important first step in allowing A.I. to support the writing of grant applications. Now it’s time for the next one—using A.I. to support grant evaluation.