
Building Research Capacity with AI

Over 25 years ago, the “10/90 gap” was used to illustrate the global imbalance in health research: only 10% of global research benefited the regions where 90% of preventable deaths occurred. Since then, efforts to improve research capacity in low- and middle-income countries (LMICs) have made important gains; nonetheless, significant challenges remain. A quarter of a century later, there are still too few well-trained researchers in LMICs, and their research infrastructure and governance remain inadequate. The scope of the problem increased dramatically in 2025, when North American and European governments cut overseas development assistance (ODA, i.e., foreign aid) precipitously. That aid, however inadequate, had supported improvements in research capacity.

Traditional approaches to improving research capacity, such as training workshops and degree scholarship programs, have gone some way to address the expertise challenge. However, they fall short because they are not scalable. The relatively recent introduction of massive open online courses (MOOCs), such as TDR/WHO’s MOOCs in implementation research, goes a long way towards overcoming that scalability problem, at least for instruction-based learning. Nonetheless, for many LMIC researchers, major bottlenecks remain because of poor or limited access to mentorship, quick one-off advice, bespoke training, research assistance, and inter- and intra-disciplinary collaboration. These bottlenecks can leave them at a persistent disadvantage compared to their high-income country counterparts. Research is not done well in isolation and ignorance.

The rise of large language model AIs (LLM-AIs) such as ChatGPT, Mistral, Gemini, Claude, and DeepSeek offers an unprecedented opportunity…and some additional risks. LLM-AIs are advanced AI models trained on vast amounts of text data to understand and generate human-like language. They are flexible, multilingual, and always available (24/7), offering researchers in LMICs immediate access to knowledge and assistance. Used well, LLM-AIs could revolutionise approaches to building research capacity and democratise access to skills, knowledge, and global scientific discourse. Many online educational providers already integrate LLM-AIs into their instructional pipelines as tutors and coaches.

Unfortunately, if LMICs cannot take advantage of the benefits of LLM-AIs, the 10/90 gap risks becoming further entrenched, or even wider.

AI as a game changer

Researchers in resource-limited settings can, for the first time, access an always-on, massively scalable assistant. By massively scalable, I mean that every researcher could have one or more decent, 24/7 research assistants for a monthly subscription of less than $20. LLM-AIs offer scalability and flexibility that traditional human research assistants cannot (and should not) match. However, they are not human and may not fully replicate a human research assistant’s nuanced understanding and critical thinking, and they are certainly less fun to have a cup of coffee with. Furthermore, the effectiveness of LLM-AIs depends on the sophistication of the user, the complexity of the task, and the quality of the input the user provides.

I read a recent post on LinkedIn by a UCLA professor decrying the inadequacies of LLM-AIs. However, a quick read of the post revealed that the professor had no idea how to engage appropriately with the technology.

Unfortunately, like all research assistants, senior researchers, and professors, LLM-AIs can be wrong. Like all tools, they need to be used with sophistication, and that takes learning.

Despite these inadequacies, LLM-AIs can remove barriers to research participation by offering tutoring on complex concepts, assisting with literature reviews and data analysis, and supporting the writing and editing of manuscripts and grant proposals.

Reid Hoffman, the AI entrepreneur, described on a podcast how he used LLM-AIs to learn about complex ideas. He would upload a research paper onto the platform and ask, “Explain this paper as if to a 12-year-old”. Hoffman could then “chat” with the LLM-AI about the paper at that level. Once comfortable with the concepts, he would ask the LLM-AI to “explain this paper as if to a high school senior”. By iterating up in age and sophistication, he could use the LLM-AI as a personal tutor.
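For readers who prefer to script the workflow rather than type into a chat window, here is a minimal sketch of the same iterating-up technique in Python. It assumes the OpenAI Python SDK and an API key; the model name, file name, and audience list are purely illustrative, and any chat-capable LLM-AI would serve equally well.

```python
# A sketch of "iterating up": ask an LLM to explain the same paper to
# progressively more sophisticated audiences. Assumes the OpenAI Python
# SDK (pip install openai) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

paper_text = open("paper.txt").read()  # a plain-text copy of the paper

audiences = ["a 12-year-old", "a high school senior",
             "an undergraduate", "a doctoral student"]

for audience in audiences:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; substitute whichever model you use
        messages=[
            {"role": "system", "content": "You are a patient research tutor."},
            {"role": "user",
             "content": f"Explain this paper as if to {audience}:\n\n{paper_text}"},
        ],
    )
    print(f"--- Explained for {audience} ---")
    print(response.choices[0].message.content)
```

In an interactive session, you would pause at each level and ask follow-up questions before moving up, which is where the real value of the technique lies.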

Researchers can also use LLM-AIs to support the preparation of scientific papers. This is already happening: an explosion of generically dull (and sometimes fraudulent) scientific papers is hitting the market. The explosion has delighted the publishing houses and created existential ennui among researchers. The problem is not the LLM-AIs themselves but how they are used, and it will take time for the paper-production cycle to settle.

While access to many LLM-AIs requires a monthly subscription, some, like DeepSeek, significantly lower cost and accessibility barriers by distributing “open weights” models. Researchers can download these models freely and run them on personal or university computing infrastructure without paying a monthly subscription. They make AI-powered research assistance viable for most LMIC research settings, and universities and research institutes can potentially lower the costs further.
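As a concrete illustration, here is a minimal sketch of running an open-weights model locally with the Hugging Face transformers library. The model identifier is one published open-weights option and is shown only as an example; the right choice depends on the hardware available, and smaller models will run on more modest machines.

```python
# A sketch of running an open-weights chat model locally. Assumes the
# transformers library (pip install transformers torch) and enough
# GPU/CPU memory for the chosen model; the model ID is illustrative.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="deepseek-ai/deepseek-llm-7b-chat",  # example open-weights model
)

prompt = "Summarise the main threats to validity in a stepped-wedge trial."
result = generator(prompt, max_new_tokens=300)
print(result[0]["generated_text"])
```

Once the weights are downloaded, no subscription or internet connection is needed, which is precisely what makes this route attractive for LMIC institutions.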

LLM-AIs allow researchers in LMICs to become less dependent on high-income countries for training and mentorship, shifting the balance towards scientific self-sufficiency. AI-powered tools could accelerate the development of a new generation of LMIC researchers, fostering homegrown expertise and leadership in relevant global science. They are no longer constrained by the curriculum and interests of high-income countries and can develop contextually relevant research expertise.

The Double-Edged Sword

Despite their positive potential, the entry of LLM-AIs into the research world could have significant downsides. Without careful implementation, existing inequalities could be exacerbated rather than alleviated. High-income countries (HICs) are already harnessing LLM-AIs at scale, integrating them into research institutions, project pipelines, training, and funding systems. LMICs, lacking the same level of investment and infrastructure, risk being left behind, again. The AI revolution could widen the research gap rather than close it, entrenching the divide between well-resourced and under-resourced institutions.

There is also a danger in how researchers use LLM-AIs. They are the cheapest research assistants ever created, which raises a troubling question: will senior researchers begin to rely on AI to replace the need for training junior scientists? If an LLM-AI can summarise the literature, draft proposals, and assist in the analysis, there is a real risk that senior researchers will neglect mentorship, training, and hands-on learning. Instead of empowering a new generation of LMIC researchers, LLM-AIs could be used as a crutch to maintain existing hierarchies. If institutions see LLM-AIs as a shortcut to productivity rather than an investment in building research capacity, the development of genuine human expertise could stall.

Compounding these risks, AI is fallible. LLM-AIs can “hallucinate”, generating false information with complete confidence. They always write with confidence. I’ve never seen one write, “I think this is the answer, but I could be wrong”. They can fabricate references, misinterpret scientific data, and reflect biases embedded in their training data. If used uncritically, they could propagate misinformation and skew research findings.
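One practical defence against fabricated references is to check each citation against a bibliographic database before relying on it. The sketch below shows one way to do this with the public CrossRef API; the reference string is a made-up example, and a returned match still needs human judgment before the citation is trusted.

```python
# A sketch of spot-checking an LLM-supplied citation against CrossRef.
# Assumes the requests library (pip install requests); the reference
# string is hypothetical. No API key is required for CrossRef.
import requests

reference = "Example et al. (2021), Research capacity strengthening in LMICs"

resp = requests.get(
    "https://api.crossref.org/works",
    params={"query.bibliographic": reference, "rows": 3},
    timeout=30,
)
resp.raise_for_status()

items = resp.json()["message"]["items"]
if not items:
    print("No match found: treat the citation as suspect.")
for item in items:
    title = item.get("title", ["(no title)"])[0]
    print(f"{item['DOI']}: {title}")
```

A returned DOI is not proof that the LLM-AI cited the paper correctly, only that a similarly titled paper exists; the content still has to be read.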

The challenge of bias is not to be underestimated. LLM-AIs are trained on the corpus of material currently available on the web, and they reflect all the biases of the web: who creates the content, what content they create, and so on.

Furthermore, while tools like DeepSeek reduce cost barriers, commercial AI models still pose a financial challenge. LMIC institutions will need to negotiate sustainable access to AI tools or risk remaining locked out of their benefits, particularly the leading-edge models. The worst outcome would be a scenario where HICs use AI to accelerate their research dominance while LMICs struggle to afford the very tools that could democratise access.

A Strategic Approach

To ensure LLM-AIs build rather than undermine research capacity in LMICs, they must be integrated strategically and equitably. Training researchers and students in AI literacy is paramount. Knowing how to ask the right questions, validate AI outputs, and integrate results into research workflows is essential. This is not a difficult task, but it takes time and effort, like all learning. The LLM-AIs can help with the task—effectively bootstrapping the learning curve.

Rather than replacing traditional research capacity building, LLM-AIs should be embedded into existing frameworks. MOOCs, mentorship programs, and research fellowships should incorporate LLM-AI-based tutoring, iterative feedback, and language support to enhance—not replace—human mentorship. The focus should be on areas where LLM-AI can offer the greatest immediate impact, such as brainstorming, editing, grant writing support, statistical assistance, and multilingual research dissemination.

Institutions in LMICs should also push for local, ethical LLM-AI development that considers regional needs. This push is easier said than done, particularly in a world of fracturing multilateralism. However, appropriately managed, LLM-AI models can be adapted to recognise and integrate local research priorities rather than merely reinforcing an existing scientific discourse. The fact that a research question is of no interest in high-income countries does not mean it is not critically urgent in an LMIC context.

Finally, securing affordable and sustainable access to AI tools will be essential. Governments, universities, and research institutions must lobby for cost-effective AI licensing models or explore open-source alternatives to prevent another digital divide. Disunited lobbying efforts are weak, but together, across national boundaries, they could have significant power.

An Equity Tipping Point

The LLM-AI revolution is a key juncture for building research capacity in LMICs. Harnessed correctly, LLM-AIs could break down long-standing barriers to participation in science, allowing LMIC researchers to compete on (a more) equal footing. The rise of models like DeepSeek suggests a future where AI is not necessarily a privilege of the few but a democratised resource for the many.

Fair access will not happen automatically. Without deliberate, ethical, and strategic intervention, LLM-AIs could reinforce existing research hierarchies. The key to harnessing the benefits of the technology lies in training researchers, integrating LLM-AIs into programs to build research capacity, and securing equitable access to the tools. Done well, LLM-AIs could be a transformative force, not just in scaling research capacity but in redefining who gets to lead global scientific discovery.

LLM-AIs offer an enormous opportunity. They could either empower LMIC researchers to chart their own scientific futures, or they could become another tool to push them further behind.


Acknowledgment: This blog builds upon insights from a draft concept note developed by me (Daniel D. Reidpath), Lucas Sempe, and Luciana Brondi from the Institute for Global Health and Development (Queen Margaret University, Edinburgh), and Anna Thorson from the TDR Research Capacity Strengthening Unit (WHO, Geneva). Our work on AI-driven research capacity strengthening in LMICs informed much of the discussion presented here.

The original draft concept note is accessible here.

A Christmas Story

In the last year of the reign of Biden, there was a ruler in Judea named Benyamin. He was a man of great cunning and greater cruelty.

In those days, Judea, though powerful, was a vassal state. Its strength came through alliances with distant empires. It wielded its might with a fierce arm and harboured a deep hatred for its neighbours. Benyamin, fearing the loss of his power, sought to destroy the Philistines on that small strip of land called Gaza, and claim it for himself.

For over four hundred and forty days and nights, he commanded his armies to bomb their towns and villages, reducing them to rubble. The Philistines were corralled, trapped within walls and wire, with no escape. Benyamin promised them safety in Rafah and bombed the people there. He offered refuge in Jabalia, and bombed the people there.

In Gaza, there was no safety and there was no food.

Even as leaders wept for the Philistines, they sold weapons to Benyamin and lent him money to prosecute his war. Thus, the world watched in silence as the Philistines endured great suffering. Their cries rose up to heaven, seemingly unanswered.

And so it came to pass, in the last days of the last year of Biden, there was a humble Philistine named Yusouf, born of the family of Dawoud. Before the war, Yusouf had been a mechanic. He worked hard each day fixing tyres and carburettors, changing brake pads and exhaust systems. And at the end of each day, he would return home to his young wife, Mariam. The same Mariam, you may have heard of her, who was known for her inexhaustible cheerfulness.

That was before the war. Now Mariam was gaunt and tired, and heavy with child.

On the night of the winter solstice, in a dream, a messenger came to Yusouf. “Be not afraid, Yusouf”, the messenger said. “Be not afraid for yourself, for the wife you love so very much, or for your son—who will change the world. What will be, will be and was always meant to be”. Yusouf was troubled by this dream, and found himself torn between wonder, happiness, and fear. Mariam asked him why he looked troubled, but he said nothing and kept his own counsel.

The following night the same messenger visited Mariam in her dreams. Mariam was neither afraid nor troubled. The next morning she had a smile on her face that Yusouf had not seen for so long he had almost forgotten it. “It is time, Yusouf”, she said. “We have to go to the hospital in Beit Lahiya.”

Yusouf was troubled. Long ago he had learned to trust Mariam, but his motorbike had no fuel, and it was a long walk: too far for Mariam, and they were bombing Beit Lahiya. He remembered the words of the messenger in his dreams, and he went from neighbour to neighbour. A teaspoon of fuel here, half a cup there. No one demanded payment. If they had any fuel, no one refused him. Having little, they shared what they had. It was the small acts of kindness that bind communities. Yusouf wept at their generosity.

When he had gathered enough fuel, he had Mariam climb on the bike. Shadiah, the old sweet seller who had not made a sweet in over a year and could barely remember the smell of honey or rosewater, helped her onto the back.

Yusouf rode carefully. He weaved slowly around potholes and navigated bumps. In spite of his care, he could feel Mariam tense and grip him tighter. And then the motorbike stopped. A last gasping jerk and silence. The fuel was spent.

The late afternoon air was cooling as he helped Mariam walk towards the hospital. When they arrived at the gate, a porter stopped them. “They’re evacuating the hospital. You can’t go in”, the porter told them. Yusouf begged. “My wife, she is going to give birth,” he told the porter, who could plainly see this for himself. The porter looked at Mariam and took pity. “You can’t go in, but there is a small community clinic around the corner. It was bombed recently, but some of it, a room or two, is still standing. I’ll send a midwife.”

Yusouf gently guided Mariam to the clinic. He found an old mattress on a broken gurney and a blanket. He laid the mattress on the floor and settled Mariam.

If there had been a midwife—if she had ever arrived… if she had ever got the porter’s message—she would have been eager to retell the story of the birth. Sharing a coffee, with a date-filled siwa, she would have painted the picture. Mariam’s face was one of grace. Yusouf anxiously held her hand. The baby came quickly, with a minimum of fuss, as if Mariam was having her fifth and not her first.

Yusouf quickly scooped up the baby as it began to vocalise its unhappiness with the shock of a cold Gaza night. He cut the cord crudely but effectively with his pocket knife. And it was only as he was passing the baby to Mariam that he looked confused. He did not have the son he was promised; he had a daughter. The moment was so fleeting that quantum physicists would have struggled to measure the breadth of time, and Yusouf smiled at the messenger’s joke.

Because there was no midwife to witness this moment, we need to account for the witnesses who were present. There was a mangy dog with a limp, looking for warmth. He watched patiently and, once the birth was completed, found a place at Mariam’s feet. There were three rats that crawled out of the rubble looking for scraps. They gave a hopeful sniff of the night air and sat respectfully and companionably on a broken chair. As soon as the moment passed, they disappeared into the crevices afforded by broken brick and torn concrete. Finally, there was an unremarkable cat. In comfortable fellowship, they all watched the moment of birth knowing that, tomorrow or the next day, they would be mortal enemies again, but tonight there was peace.

“Nasrin”, Yusouf whispered in Mariam’s ear as he kissed her forehead. “We’ll call her Nasrin.” The wild rose that grows and conquers impossible places.

There was a photojournalist called Weissman, who heard from the porter that there was a very pregnant woman at the clinic. “She’s about to pop”, the porter said. Weissman hurried to the bombed-out clinic so that he could bear witness to this miracle in the midst of war.

He missed the birth. And when he arrived, he did not announce his presence. It seemed rude. An intrusion on a very private moment. It did not, however, stop him from taking photos for AAP.

He later shared those images with the world. Yusouf lay on the gurney mattress, propped against a half-destroyed wall. Mariam was lying against him, exhausted, eyes closed, covered in a dirty blanket. The baby Nasrin was feeding quietly, just the top of her head with a shock of improbably thick dark hair peeking out. Yusouf stared through the broken roof at the stars in heaven, the blackness of a world without electricity made resplendent. He looked up with wonderment and contentment on his face. He was blessed, he thought. No. They were blessed. The messenger was right.

As Weissman picked his way in the dark towards the hospital gate, where he had last seen the porter, he shared the same hope that he had seen on Yusouf’s face. New life can change things.

The night sky lit up, brightening his path to the hospital. He turned back and was awed by a red flare descending slowly over the remains of the clinic as if announcing a new beginning to the world. A chance for something different was born here today.

The explosion shook the ground, and Weissman fell. Cement and brick dust from where the clinic had stood rose sharply into the air. An avalanche of dust raced towards him.

UKRI got its A.I. policy half right


UKRI AI policy: Authors on the left. Assessors on the right (image generated by DALL.E)

When UKRI released its policy on using generative artificial intelligence (A.I.) in funding applications this September, I found myself nodding along until, suddenly, I wasn’t. Like many in the research community, I’ve been watching the integration of A.I. tools into academic work with excitement and trepidation. UKRI’s approach is a puzzling mix of Byzantine architecture and modern chic.

The modern chic, the half they got right, is on using A.I. in research proposal development. By adopting what amounts to a “don’t ask, don’t tell” policy, they have side-stepped endless debates that swirl about university circles. Do you want to use an A.I. to help structure your proposal? Go ahead. Do you prefer to use it for brainstorming or polishing your prose? That’s fine, too. Maybe you like to write your proposal on blank sheets of paper using an HB pencil. You’re a responsible adult—we’ll trust you, and please don’t tell us about it.

The approach is sensible. It recognises A.I. as just one of the many tools in the researcher’s arsenal. It is no different in principle from grammar-checkers or reference managers. UKRI has avoided creating artificial distinctions between AI-assisted work and “human work” by not requiring disclosure. Such a distinction also becomes increasingly meaningless as A.I. tools integrate into our daily workflows, often completely unknown to us.

Now let’s turn to the Byzantine, the half UKRI got wrong: the part dealing with assessors of grant proposals. Here, UKRI seems to have lost its nerve. The complete prohibition on assessors using A.I. feels like a policy from a different era, some time “Before ChatGPT” (B.C.), i.e., before its release in November 2022. The B.C. policy fails to recognise the enormous potential of A.I. to support and improve human assessors’ judgment.

You’re a senior researcher who’s agreed to review for UKRI. You have just submitted your own proposal, using an A.I. to clean, polish, and improve the work. As an assessor, you are now juggling multiple complex proposals, each crossing traditional disciplinary boundaries (which is increasingly regarded as a positive). You’re probably doing this alongside your day job because that’s how senior researchers work. Wouldn’t it be helpful to have an A.I. assistant to organise key points, flag potential concerns, help clarify technical concepts outside your immediate expertise, act as a sounding board, or provide an intelligent search of the text?

The current policy says no. Assessors must perform every aspect of the review manually, potentially reducing the time they can spend on a deep evaluation of the proposal. The restriction becomes particularly problematic when considering international reviewers, especially those from the Global South. Many brilliant researchers who could offer valuable perspectives might struggle with English as a second language and miss some nuance without support. A.I. could help bridge this gap, but the policy forbids it.

The dual-use policy leads to an ironic situation. Applicants can use A.I. to write their proposals, but assessors can’t use it to support the evaluation of those proposals. It is like allowing Formula 1 teams to use bleeding-edge technology to design their racing cars while insisting that race officials use an hourglass and the naked eye to monitor the race.

Strategically, the situation worries me. Research funding is a global enterprise; other funding bodies are unlikely to maintain such a conservative stance for long. As other funders integrate A.I. into their assessment processes, they will develop best-practice approaches and more efficient workflows. UKRI will fall behind. This could affect the quality of assessments and UKRI’s ability to attract busy reviewers. Why would a busy senior researcher review for UKRI when other funders value their reviewers’ time and encourage efficiency and quality?

There is a path forward. UKRI could maintain its thoughtful approach to applicants while developing more nuanced guidelines for assessors. One approach would be a policy that clearly outlines appropriate A.I. use cases at different stages of assessment, from initial review to technical clarification to quality control. By adding transparency requirements, proper training, and regular policy reviews, UKRI could lead the way with approaches that both protect research integrity and embrace innovation.

If UKRI is nervous, they could start with a pilot program. Evaluate the impact of A.I.-assisted assessment. Compare it to the traditional approach. This would provide evidence-based insights for policy development while demonstrating leadership in research governance and funding.

The current policy feels half-baked. UKRI has shown they can craft sophisticated policy around A.I. use. The approach to applicants proves this. They need to extend that same thoughtfulness to the assessment process. The goal is not to use A.I. to replace human judgment but to enhance it. It would allow assessors to focus their expertise where it matters most.

This is about more than efficiency and keeping up with technology. It’s about creating the best possible system for identifying and supporting excellent research. If A.I. is a tool to support this process, we should celebrate. When we help assessors do their job more effectively, we help the entire research community.

The research landscape is changing rapidly. UKRI has taken an important first step in allowing A.I. to support the writing of funding grant applications. Now it’s time for the next one—using A.I. to support funding grant evaluation.

Harmonising Climate Protest with AI

Protest singer on an empty street corner (DALL.E created)

Protest songs have a rich and powerful history. They bring attention to issues and catalyse social change. From Bob Dylan’s poignant ballads to John Lennon’s “Give Peace a Chance”, music has been a potent force in shaping public opinion and spurring political action.

Most of us will never be a Dylan or a Lennon. I can barely hold a tune in the shower, and the only protests I ever hear are from my partner begging me to stop singing.

When it comes to the existential threat of climate change, there has been a surprising dearth of anthems that capture the zeitgeist and propel politicians forward. Given the urgency and scale of the crisis, one might expect a groundswell of musical activism akin to the protest songs that defined the civil rights, anti-war, and environmental movements of the 1960s and 70s. While there have been some notable examples, climate change hasn’t spawned a recognisable musical rallying cry that has permeated public consciousness and political discourse in quite the same way.

We are not missing information about the extent of the threat. Climate change has been a topic of discussion among scientists for at least four decades, and the evidence of its devastating impacts has been well-known for at least two decades. Despite this, the world’s response has been inadequate. Major carbon emitters have talked about the issue and have taken some actions, but these have been too limited, aimed at protecting a political base, and have not addressed issues of equity. The result? Global temperatures continue to rise, and the threat of climate change looms larger than ever.

Where are those protest songs that can galvanise the public and demand action from our leaders? Most of us lack the musical talent to create such anthems. We do not know a bass clef from a semi-quaver or ska from a xylophone, but what if there were a way for non-musicians to give voice to their fury?

Enter AI.

Large language models such as Mistral, Claude, or ChatGPT can help write a song, and AI music generators like Suno can help voice it and set it to music. By combining these tools, anyone can create music. With luck, it may inspire, educate, and motivate people to take action. While these tools are not yet as good as good musicians, good musicians are relatively rare and they’re not necessarily interested in singing your song.

To illustrate the idea, I generated a couple of modest examples of climate protest songs in two completely different musical styles. The first, “Climate change love”, is a dark scat-jazz satire of what is (or may be) to come. The second, “Le futur proche” (“the near future”), is a “rock anthem” on the short-sightedness of the upcoming UN Summit of the Future, which completely misses the opportunity to consider what happens if we fail.

I know nothing about composing jazz or rock, but AI gives me a touch point to an expressive medium that is otherwise completely out of reach. It can democratise the protest song and give voice to a tin-eared muser. My two examples will not create a groundswell of protest or spin the earth off its axis (to paraphrase one of the songs). Each one took about 15 minutes to generate from lyrics to the final product.

My partner tells me they are repetitive and derivative, and I should not be as impressed as I am. She’s probably right! But the songs are infinitely better than anything I could produce on my own. You also can’t expect too much from the minimal effort I expended. Hopefully, smarter and more talented people will be inspired to explore this medium and maybe spend an hour or two creating a song. Voice your protest in afrobeat rockabilly, sitar southern rock, or lo-fi Pacific reggae.

AI protest songs may not be perfect, but if Bob (Dylan or Marley) would like to contact me, perhaps we could collaborate on something that will shake the world.

In the meantime, let me leave you with Claude.ai’s lyrical take on the UN Summit of the Future…

Summit of the Future, planning for the peak
But what if we’re on the brink of a valley deep?
Climate’s getting hotter, world’s in decline.
Leaders need to wake up before we’re out of time!