Over 25 years ago, the “10/90 gap” was used to illustrate the global imbalance in health research: only 10% of global research benefited the regions where 90% of preventable deaths occurred. Since then, efforts to improve research capacity in those low- and middle-income countries (LMICs) have made important gains; nonetheless, significant challenges remain. A quarter of a century later, there are still too few well-trained researchers in LMICs, and research infrastructure and governance remain inadequate. The scope of the problem increased dramatically in 2025, when North American and European governments precipitously cut overseas development assistance (ODA, i.e., foreign aid). That aid, however inadequate, had supported improvements in research capacity.
Traditional approaches to improving research capacity, such as training workshops and degree scholarship programs, have gone some way towards addressing the expertise challenge. However, they fall short because they are not scalable. The relatively recent introduction of massive open online courses (MOOCs), such as TDR/WHO’s MOOCs in implementation research, goes a long way towards overcoming that scalability problem, at least for instruction-based learning. Nonetheless, for many LMIC researchers, major bottlenecks remain: poor or limited access to mentorship, quick one-off advice, bespoke training, research assistance, and inter- and intra-disciplinary collaboration. These bottlenecks leave them at a persistent disadvantage compared to their high-income country counterparts. Research is not done well in isolation or in ignorance.
The rise of large language model AIs (LLM-AIs) such as ChatGPT, Mistral, Gemini, Claude, and DeepSeek offers an unprecedented opportunity…and some additional risks. LLM-AIs are advanced AI models trained on vast amounts of text data to understand and generate human-like language. They are flexible, multilingual, and available 24/7, offering researchers in LMICs immediate access to knowledge and assistance. Used well, LLM-AIs could revolutionise approaches to building research capacity and democratise access to skills, knowledge, and global scientific discourse. Many online educational providers already integrate LLM-AIs into their instructional pipelines as tutors and coaches.
Unfortunately, if LMICs cannot take advantage of the benefits of LLM-AIs, the 10/90 gap risks becoming further entrenched, or even wider.
AI as a Game Changer
For the first time, researchers in resource-limited settings can access an always-on, massively scalable assistant. By massively scalable, I mean that every researcher could have one or more decent, 24/7 research assistants for a monthly subscription of less than $20. LLM-AIs offer a scalability and flexibility that traditional human research assistants cannot (and should not) match. However, they are not human and may not fully replicate a human research assistant’s nuanced understanding and critical thinking, and they are certainly less fun to have a cup of coffee with. Furthermore, the effectiveness of LLM-AIs depends on the sophistication of the user, the complexity of the task, and the quality of the input the user provides.
I recently read a LinkedIn post by a UCLA professor decrying the inadequacies of LLM-AIs. A quick read of the post, however, revealed that the professor had no idea how to engage appropriately with the technology.
Unfortunately, like all research assistants, senior researchers, and professors, LLM-AIs can be wrong. And like all tools, they must be learned: using them with sophistication takes practice.
Despite their inadequacies, LLM-AIs can remove barriers to research participation by offering tutoring on complex concepts, assisting with literature reviews and data analysis, and supporting the writing and editing of manuscripts and grant proposals.
Reid Hoffman, the AI entrepreneur, described on a podcast how he used LLM-AIs to learn about complex ideas. He would upload a research paper to the platform and ask, “Explain this paper as if to a 12-year-old”. Hoffman could then “chat” with the LLM-AI about the paper at that level. Once comfortable with the concepts, he would ask the LLM-AI to “explain this paper as if to a high school senior”. By iterating up in age and sophistication, he could use the LLM-AI as a personal tutor.
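To make the idea concrete, here is a minimal sketch of that iterative tutoring loop in Python, using the OpenAI SDK. It is illustrative only: the model name, file name, and sophistication levels are my own assumptions, and any chat-capable LLM-AI with a similar API would serve just as well.

```python
# A sketch of Hoffman-style iterative tutoring. Assumptions: the OpenAI
# Python SDK (v1+), an OPENAI_API_KEY in the environment, and a plain-text
# copy of the paper in paper.txt; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

paper_text = open("paper.txt").read()

# Step the explanation up through increasing levels of sophistication.
levels = ["a 12-year-old", "a high-school senior", "a master's student"]

history = [{"role": "user",
            "content": f"Here is a research paper:\n\n{paper_text}"}]
for level in levels:
    history.append({"role": "user",
                    "content": f"Explain this paper as if to {level}."})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; substitute any chat model
        messages=history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print(f"--- Explained for {level} ---\n{answer}\n")
```

Keeping the whole conversation in `history` is what lets each explanation build on the last, mirroring the way Hoffman “chats” his way up the sophistication ladder.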
Researchers can also use LLM-AIs to support the preparation of scientific papers. This is already happening: an explosion of generically dull (and sometimes fraudulent) scientific papers is hitting the market. The explosion has delighted the publishing houses and created existential ennui among researchers. The problem is not the LLM-AIs; it lies in how they are used, and it will take time for the paper-production cycle to settle.
While access to many LLM-AIs requires a monthly subscription, some, like DeepSeek, significantly lower cost and accessibility barriers by distributing “open-weights models”. Researchers can download these models freely and run them on personal or university computing infrastructure without paying a monthly subscription. Open-weights models make AI-powered research assistance viable for most LMIC research settings, and universities and research institutes can potentially lower the costs further.
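For the technically curious, here is a minimal sketch of what running an open-weights model locally can look like, using the widely used Hugging Face `transformers` library. The model identifier is illustrative (check the Hugging Face Hub for current DeepSeek or other open-weights releases), and models of this size assume a GPU with substantial memory.

```python
# A sketch of local, subscription-free inference with an open-weights model.
# Assumptions: transformers and accelerate are installed, and the model
# identifier below is illustrative rather than a recommendation.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="deepseek-ai/deepseek-llm-7b-chat",  # illustrative open-weights model
    device_map="auto",  # places the model on a GPU if one is available
)

prompt = "Summarise the main threats to validity in a stepped-wedge trial."
result = generator(prompt, max_new_tokens=300)
print(result[0]["generated_text"])
```

Once the weights are downloaded, inference runs entirely on local infrastructure: no per-query fees, and no data leaving the institution.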
LLM-AIs allow researchers in LMICs to become less dependent on high-income countries for training and mentorship, shifting the balance towards scientific self-sufficiency. AI-powered tools could accelerate the development of a new generation of LMIC researchers, fostering homegrown expertise and leadership in relevant global science. LMIC researchers are no longer constrained by the curricula and interests of high-income countries and can develop contextually relevant research expertise.
The Double-Edged Sword
Despite their positive potential, the entry of LLM-AIs into the research world could have significant downsides. Without careful implementation, existing inequalities could be exacerbated rather than alleviated. High-income countries are already harnessing LLM-AIs at scale, integrating them into research institutions, project pipelines, training, and funding systems. LMICs, lacking the same level of investment and infrastructure, risk being left behind yet again. The AI revolution could widen the research gap rather than close it, entrenching the divide between well-resourced and under-resourced institutions.
There is also a danger in how researchers use LLM-AIs. They are the cheapest research assistants ever created, which raises a troubling question: will senior researchers begin to rely on AI to replace the need for training junior scientists? If an LLM-AI can summarise the literature, draft proposals, and assist with analysis, there is a real risk that senior researchers will neglect mentorship, training, and hands-on learning. Instead of empowering a new generation of LMIC researchers, LLM-AIs could be used as a crutch to maintain existing hierarchies. If institutions see LLM-AIs as a shortcut to productivity rather than an investment in building research capacity, the development of genuine human expertise could stall.
Compounding these risks, AI is fallible. LLM-AIs can “hallucinate”, generating false information with complete confidence. They always write with confidence; I have never seen one write, “I think this is the answer, but I could be wrong”. They can fabricate references, misinterpret scientific data, and reflect biases embedded in their training data. If used uncritically, they could propagate misinformation and skew research findings.
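One concrete validation habit is cheap and easy to automate: before trusting an AI-generated reference list, check that the DOIs actually resolve. Here is a minimal sketch using the public Crossref REST API; the DOIs shown are placeholders, not real citations.

```python
# A sketch of screening AI-generated references against Crossref.
# A DOI that resolves is necessary but not sufficient: the paper must
# still be read to confirm it says what the LLM-AI claims it says.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref knows this DOI, False otherwise."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    return resp.status_code == 200

suspect_dois = ["10.1000/example-1", "10.1000/example-2"]  # placeholders
for doi in suspect_dois:
    verdict = "found" if doi_exists(doi) else "NOT FOUND (possibly fabricated)"
    print(f"{doi}: {verdict}")
```

Even this crude check filters out invented DOIs; verifying that a real paper actually supports a claim still falls to the human.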
The challenge of bias is not to be underestimated. LLM-AIs are trained on the corpus of material currently available on the web, and they reflect all the biases of the web: who creates the content, what content they create, and so on.
Furthermore, while tools like DeepSeek reduce cost barriers, commercial AI models still pose a financial challenge. LMIC institutions will need to negotiate sustainable access to AI tools or risk remaining locked out of their benefits, particularly the leading-edge models. The worst outcome would be a scenario in which high-income countries use AI to accelerate their research dominance while LMICs struggle to afford the very tools that could democratise access.
A Strategic Approach
To ensure LLM-AIs build rather than undermine research capacity in LMICs, they must be integrated strategically and equitably. Training researchers and students in AI literacy is paramount: knowing how to ask the right questions, validate AI outputs, and integrate results into research workflows is essential. This is not a difficult task, but like all learning, it takes time and effort. The LLM-AIs themselves can help with the task, effectively bootstrapping the learning curve.
Rather than replacing traditional research capacity building, LLM-AIs should be embedded into existing frameworks. MOOCs, mentorship programs, and research fellowships should incorporate LLM-AI-based tutoring, iterative feedback, and language support to enhance, not replace, human mentorship. The focus should be on areas where LLM-AIs can offer the greatest immediate impact, such as brainstorming, editing, grant writing support, statistical assistance, and multilingual research dissemination.
Institutions in LMICs should also push for local, ethical LLM-AI development that considers regional needs. This is easier said than done, particularly in a world of fracturing multilateralism. Managed appropriately, however, LLM-AI models can be adapted to recognise and integrate local research priorities rather than merely reinforcing the existing scientific discourse. The fact that a research question is of no interest to high-income countries does not mean it is not critically urgent in an LMIC context.
Finally, securing affordable and sustainable access to AI tools will be essential. Governments, universities, and research institutions must lobby for cost-effective AI licensing models or explore open-source alternatives to prevent another digital divide. Disunited lobbying efforts are weak; together, across national boundaries, they could wield significant power.
An Equity Tipping Point
The LLM-AI revolution is a critical juncture for building research capacity in LMICs. Harnessed correctly, LLM-AIs could break down long-standing barriers to participation in science, allowing LMIC researchers to compete on (a more) equal footing. The rise of models like DeepSeek suggests a future where AI is not necessarily a privilege of the few but a democratised resource for the many.
Fair access will not happen automatically. Without deliberate, ethical, and strategic intervention, LLM-AIs could reinforce existing research hierarchies. The key to harnessing the benefits of the technology lies in training researchers, integrating LLM-AIs into programs that build research capacity, and securing equitable access to the tools. Done well, LLM-AIs could be a transformative force, not just in scaling research capacity but in redefining who gets to lead global scientific discovery.
LLM-AIs offer an enormous opportunity. They could either empower LMIC researchers to chart their own scientific futures, or become another tool that pushes them further behind.
Acknowledgment: This blog builds upon insights from a draft concept note developed by me (Daniel D. Reidpath), Lucas Sempe, and Luciana Brondi from the Institute for Global Health and Development (Queen Margaret University, Edinburgh), and Anna Thorson from the TDR Research Capacity Strengthening Unit (WHO, Geneva). Our work on AI-driven research capacity strengthening in LMICs informed much of the discussion presented here.
The original draft concept note is accessible here.