Citation Panic

Fake citation panic has arrived. A paper recently published in The Lancet audited 2.5 million biomedical papers for fabricated citations, that is, “references whose claimed titles correspond to no existing publication”. The headline finding was that 1 in 277 papers contained fake citations.

The response to the paper’s publication has been varied but at times close to hysterical, including this news article in Nature: “Surge in fake citations uncovered by audit of 2.5 million biomedical-science papers”.

The Lancet paper itself has a few problems in the way it presents the results, finally settling on the most heart-stopping number (#Affected_Papers/#Papers): 1 paper in every 277. An “affected paper” is one with at least a single fabricated citation. If you look at the data in terms of total references, the rate of fake citations is extremely low: 4046 / 97.1 million ≈ 0.0042%.

The seriousness of the problem also needs to be set against the backdrop of general error rates in citations pre-AI. The rate was around 15%, with 9% being major errors “in which the referenced source either fails to substantiate, is unrelated to, or contradicts the assertion”. In other words, citation unreliability predates LLMs and occurs at rates vastly exceeding outright fabrication. Assuming the fabricated citations found in the Lancet paper are attributable to AI, they have added a tiny quantum to a substantial existing problem.
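
To put the two rates side by side, here is a back-of-envelope comparison in Python using the figures quoted above (treating the pre-AI error rates and the audit’s fabrication rate as applying to the same literature is an assumption):

```python
# Back-of-envelope comparison, using the figures quoted above.
total_refs = 97_100_000  # total references covered by the Lancet audit
fabricated_refs = 4_046  # references flagged as fabricated

fabrication_rate = fabricated_refs / total_refs
print(f"Fabrication rate per reference: {fabrication_rate:.4%}")  # ~0.0042%

# Pre-AI baseline: ~15% of citations contain errors; ~9% are major errors
# (fail to substantiate, are unrelated to, or contradict the assertion).
major_error_rate = 0.09
print(f"Major citation errors per fabrication: {major_error_rate / fabrication_rate:,.0f}")
# Roughly two thousand major citation errors for every outright fabrication.
```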

As academic researchers, we also need to be honest about how we use the literature in the development of a paper, and the extent to which a fake citation is actually doing load-bearing epistemic work. Remember, fake citations that assert complete nonsense are valueless to the author because they act as flags of authorial ineptitude. A “good” fake citation is one that says nothing too controversial and supports the general thrust of normal science.

There are five potential, non-load-bearing reasons for adding citations:

1. Credentialling. Signalling to reviewers and readers that you belong to the field, have done the reading, know the players. It is a display of club membership.

2. Tribute. Citing the people who might review your paper, your supervisor, your allies. It is a social currency. Although it happens less now, it was relatively common practice for anonymous reviewers to suggest a citation to be included in a resubmission: i.e., you forgot to cite ME!

3. Defensive armour. Pre-empting the reviewer who asks “but what about X?” You cite X, even cursorily, to close the objection.

4. Territorial marking. Establishing your position in an intellectual lineage. “I stand in this tradition, not that one.”

5. Apparent support. Your actual use case. Finding something that gestures in the direction of your claim.

All of these reasons for citation have important sociological roles in the production of science. Many could also support a fake citation, and none of them (with the possible exception of “apparent support”) does substantial epistemic load-bearing. This is plausibly why the erroneous (inappropriate) citation rates pre-AI have made so few waves in the academic world. Citations are the intellectual grease that moves a paper forward. Failure to sprinkle them throughout would condemn a paper to the bin, never even getting further than the editor’s desk.

The people with most to fear from fake citations are perhaps bibliometricians, the researchers who treat citation counts as meaningful measures of scientific quality and impact. The fact that fake citations readily get through the peer-review process is a strong signal that they are not doing heavy work within the paper.

An examination of the Supplementary Material of the Lancet paper is also revealing. In Appendix 2 the authors provide “Illustrative Examples of Suspected Fabricated References”. In each case they misquote the actual text associated with the reference. They do not simply compress the quote; they substitute words and meaning.

Example A:

The increase in ICU admissions in the post-implementation group suggests that more individuals survived initial injuries and required intensive care, aligning with findings by Doe and Smith.

Actual:

The increase in ICU admissions in the post-implementation group suggests that more severe cases required intensive care, possibly due to the application of stricter monitoring and care protocols for high-risk patients (Doe and Lee 2023; Doe and Smith 2023; Smith and Johnson 2020).

Example B:

MRI excels in soft tissue visualization and compositional assessment; CT offers superior bone detail and faster acquisition, with emerging dual-energy techniques adding material differentiation capabilities.

Actual:

MRI excels in soft tissue visualization and compositional assessment; CT offers superior bone structure characterization; ultrasound provides dynamic, real-time evaluation; and nuclear medicine techniques, including positron emission tomography (PET), capture metabolic and molecular processes [88,89,90].

Example C:

Activation of P2X7 promotes astrocyte differentiation, and astrocytes can secrete inflammatory mediators, which further promote the activation of microglia and ultimately contribute to the development of chronic pain.

Actual:

Activation of P2X7 promotes astrocyte differentiation, and astrocytes can secrete D-serine, a co-agonist at NMDARs, to enhance both homo- and heterosynaptic long-term potentiation (LTP) (146).

The examples are not neutral summaries of the original text. The altered wording makes the fabricated citations appear more epistemically important than they are in the source papers.

A statement supported by multiple citations, one of which is fabricated, is not the same as a statement appearing to rest on a single fabricated citation (Examples B and C). A fabricated citation linked to a statement that makes anodyne and uncontroversial observations about imaging techniques (Example B) is not doing substantial epistemic work. A statement that “more severe cases required intensive care” is far more cautious than the stronger causal interpretation that “more individuals survived initial injuries” (Example A).

The irony here is difficult to miss. A paper warning about fabricated citations strengthens its case through altered quotations that make the offending references appear more epistemically consequential than they are in the source papers themselves.

The Lancet paper identifies a real phenomenon. Fabricated citations exist and appear to be increasing, although they are currently a marginal problem in the grand scheme of academic literature. But the examples in the supplementary appendix suggest that the dominant failure mode is not fabricated evidence supporting radical falsehoods. More often, the fake citations appear attached to background claims, disciplinary signalling, or low-load rhetorical scaffolding. That does not make them acceptable, but it does matter for understanding the scale and nature of the problem.

The more interesting question is not simply whether a citation exists, but what work it is doing in the argument. A fake citation attached to intellectual grease is very different from fabricated evidence entering a meta-analysis or clinical guideline. Treating all citation failures as equivalent obscures the distinction between bibliographic error, rhetorical inflation, and genuine epistemic corruption.

A New Global Health Architecture: Maximising Health Returns

There have been a number of opinion pieces, resets, and declarations on what is needed in a new Global Health Architecture, authored by diplomats, former Prime Ministers and Presidents, multilateral agency staff and former staff, philanthropies, and peak bodies. They have appeared in prestigious peer-reviewed journals and on corporate websites. Here, I have attempted to synthesise the core messages and draw them together into an aggregate position that can help national governments action some of the ideas.

It starts with the contemporary global health landscape, which is increasingly defined by structural fiscal contraction, evolving geopolitical priorities, and the imperative to sustain health systems performance under conditions of constrained financing. In this context, legacy colonial models of development assistance—characterised by externally driven priorities, fragmented delivery channels, and open-ended commitments—are no longer fit for purpose. A transition toward more efficient, sovereign-aligned frameworks can deliver health at scale across well-segmented population groups.

At the centre of this transition is a reassertion of national sovereignty. Low- and lower-middle-income countries have historically been analysed through the lens of deficit-based models—most notably income, poverty, and other World Bank-style development indicators. In a context of increasing financial constraint and projected stagnation or decline in economic growth, that approach invites systemic failure.

A better alternative is an analysis of countries through the lens of asset-based models and indicators. This shift allows for a reorientation from needs-based allocation towards strategic engagement, in which financially flexible partners align with nationally defined priorities to co-develop fully costed health pathways. Such pathways provide end-to-end cost visibility, improving efficiency and accountability and enabling the precise calibration of health investments against projected returns. The well-established link between health improvement and economic productivity can be operationalised as part of national investment cases.

This reorientation will motivate a shift away from traditional sovereign lending, with its associated conditionalities, as the dominant financing modality. While sovereign lenders have played a critical role in expanding access and supporting system development, their balance sheets are increasingly constrained. Capital markets offer emerging mechanisms to complement sovereign financing in targeted areas where risk can be appropriately structured and priced. Through structured asset-pooling within and across countries, these mechanisms can enhance risk absorption and expand resource mobilisation. Over time, they may progressively relieve pressure on sovereign financing, enabling health systems to access more diversified funding streams while reducing exposure to fiscal volatility. This, in turn, may lessen reliance on sovereign conditionalities and allow national governments greater implementation flexibility.

Operationalising this shift requires the development of a coherent investment architecture. One approach is the creation of Population Equity Units (PEUs), which serve as the foundational analytical and financial entities within national systems. These units can be aggregated into stratified Demographic Asset Classes, reflecting variations in projected lifetime contribution, health system utilisation, and responsiveness to intervention. The introduction of such classifications enables a more granular understanding of where investments are likely to generate the greatest returns for national governments alongside the highest health-value gains.

To support decision-making across this architecture, a Health Returns Value Index (HRVI) can be employed. The Index would provide a standardised metric for comparing Population Equity Units based on anticipated health outcomes relative to cost. This facilitates outcome-weighted health investment prioritisation, ensuring that limited resources are allocated in a manner consistent with maximising aggregate health systems performance. Importantly, such an Index would allow for dynamic recalibration over time, as demographic, epidemiological, and economic conditions evolve.
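
None of the source documents specify how such an index would actually be computed. Purely to make the proposal concrete, here is a minimal sketch of the kind of calculation implied; every field name, unit, and value below is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class PopulationEquityUnit:
    name: str
    expected_health_gain: float  # hypothetical: projected health outcome units gained
    projected_cost: float        # hypothetical: fully costed pathway, currency units

def hrvi(peu: PopulationEquityUnit) -> float:
    """Health Returns Value Index: anticipated health outcomes relative to cost."""
    return peu.expected_health_gain / peu.projected_cost

# Outcome-weighted prioritisation: rank PEUs by HRVI, highest first.
portfolio = [
    PopulationEquityUnit("urban working-age", 12_000, 4_000_000),
    PopulationEquityUnit("rural under-5s", 9_500, 5_500_000),
]
for peu in sorted(portfolio, key=hrvi, reverse=True):
    print(f"{peu.name}: HRVI = {hrvi(peu):.6f}")
```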

Within this framework, national health systems can be conceptualised as Health Equity Portfolios. These portfolios comprise a diversified set of Population Equity Units across multiple Demographic Asset Classes, each contributing differently to overall system yield. Standard portfolio management principles can then be applied, including allocation, rebalancing, and risk mitigation. High-performing segments—those demonstrating strong alignment between investment and realised outcomes—can be prioritised for sustained or increased capital allocation.

Conversely, Population Equity Units falling below defined marginal value thresholds may require structured reassessment. In such cases, mechanisms for managed transition, including consolidation or phased divestment, can be introduced to preserve portfolio efficiency. These processes should be governed by transparent criteria and embedded within broader national planning frameworks to ensure predictability and stability.

The potential integration of capital markets provides an opportunity to further enhance the flexibility of this model. Population Equity Units can be progressively bundled into tradable instruments, including outcome-linked bonds and equity participation vehicles. These instruments allow external sovereign and market investors to assume a share of the financial risk associated with health investments, while aligning returns directly with measurable outcomes. In doing so, they create a direct linkage between system performance and capital flows, reinforcing incentives for efficiency and innovation.

A complementary development is the introduction of rating systems for Demographic Asset Classes. Drawing on established methodologies from financial markets, population segments can be assigned standardised ratings based on projected return profiles and risk characteristics. AAA-rated population segments—those with high expected returns and low variability—can be prioritised for long-term investment, while sub-investment grade cohorts may be subject to targeted de-risking strategies, including controlled exposure limits, selective disengagement, or phased reallocation of resources. Rating migration over time provides an additional feedback mechanism, enabling continuous optimisation of the Health Equity Portfolio, including downgrade-triggered reallocation where required.
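
The rating mechanics, again sketched only to show the logic (the text specifies no scale beyond the AAA and sub-investment-grade framing, so every threshold and label below is invented):

```python
def rate_segment(hrvi_score: float, volatility: float) -> str:
    """Assign a hypothetical credit-style rating to a Demographic Asset Class."""
    if hrvi_score >= 0.0030 and volatility < 0.10:
        return "AAA"  # high expected returns, low variability
    if hrvi_score >= 0.0015:
        return "BBB"  # investment grade
    return "BB"       # sub-investment grade: candidate for de-risking

def on_rating_migration(previous: str, current: str) -> str:
    """Downgrade-triggered reallocation as a feedback rule."""
    order = {"BB": 0, "BBB": 1, "AAA": 2}
    if order[current] < order[previous]:
        return "structured reassessment / phased divestment"
    return "sustain or increase capital allocation"

print(on_rating_migration("AAA", "BBB"))  # downgrade -> reassessment
```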

One of the strategic advantages of this approach for national governments is the reconceptualisation of equity. Rather than being treated as a purely distributive principle, equity can be operationalised as a function of participation and alignment with system performance requirements. Under this model, Population Equity Units hold differentiated positions within the national portfolio, reflecting their contribution to and benefit from collective investment. This ensures that resource allocation remains responsive to both system performance and evolving demographic realities.

Institutionally, the framework aligns with a broader functional redefinition of global health actors. Multilateral organisations, including normative bodies, can focus on establishing standards, developing metrics such as the HRVI, and convening stakeholders across sectors. Implementation and operational decision-making are devolved to national and regional entities, consistent with the principle of subsidiarity. This division of labour reduces duplication and enhances system coherence.

Financing flows, in turn, become more targeted and time-bound. Development assistance becomes progressively redundant, reducing exposure to sovereign conditionalities, and financing is repositioned as catalytic capital, supporting transitions toward domestically anchored and market-enabled systems. Global public goods—such as surveillance, research and development, and epidemic preparedness—remain appropriate areas for sustained collective investment, given their transnational nature and positive externalities. Nonetheless, they would need to demonstrate measurable impact on the Population Equity Units, and a positive return on investment.

The proposed model is not without complexity. The introduction of new instruments, metrics, and governance arrangements requires careful design and sequencing. Data systems must be strengthened to support accurate classification, valuation, and monitoring of Population Equity Units, with the resulting data architecture constituting a high-value analytical asset class in its own right, potentially suitable for managed service provision or structured private participation. Regulatory frameworks must evolve to accommodate novel financing mechanisms while safeguarding system integrity. Capacity building at national and subnational levels is essential to ensure effective portfolio management.

The risks of inaction, however, are far greater. Persisting with fragmented, input-driven, and fiscally unsustainable models will undermine both efficiency and impact. By contrast, a transition toward a maximally efficient, return-oriented framework offers the potential to sustain and enhance health outcomes despite resource constraints.

The convergence of fiscal pressure, institutional reform, and financial innovation creates a significant opportunity to re-engineer the global health architecture around principles of equity, efficiency, alignment, and sustainability. Through the structuring of Population Equity Units, the deployment of the Health Returns Value Index, and the gradual mobilisation of capital markets, it is possible to construct Health Equity Portfolios that are resilient, adaptive, and performance-oriented. Such an approach ensures that, even under conditions of constrained financing, health systems can continue to deliver measurable value at scale for national governments.

Health system sustainability is preserved through disciplined alignment of investment with demonstrable population value.

Campbell and Stanley explained replication rates in 1963

Over 60 years ago, Donald Campbell and Julian Stanley published their classic, slim volume Experimental and Quasi-Experimental Designs for Research. One of their earliest observations concerns the trade-off between internal and external validity. Specifically, the more precisely one can establish a causal relationship, the less one can say about its generality. In recent work, I show that simultaneously maximising internal and external validity is not merely a practical limitation to be mitigated, but a structural impossibility. The relationship is analogous to the Heisenberg uncertainty principle, which shows that one cannot simultaneously know both the position and momentum of a particle with arbitrary precision. In the context of the social and behavioural sciences, the more precisely one identifies a cause, the narrower the domain to which that knowledge applies.
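
Schematically (this is my shorthand for the analogy, not the formalisation in the paper itself), the claim is that the two validities trade off as a bounded product, just as position and momentum uncertainties do:

```latex
% Heisenberg: the product of position and momentum uncertainties is bounded below.
\sigma_x \, \sigma_p \;\geq\; \frac{\hbar}{2}
% Schematic analogue: internal validity (I) and external validity (E) of a single
% design cannot both be made arbitrarily high; their product is bounded above by
% a constant k fixed by the design space.
I \cdot E \;\leq\; k
```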

I reviewed this problem in terms of the so-called “replication crisis”, the difficulty researchers have encountered in replicating published causal findings. Shortly after I posted that paper, Nature published a series of articles on research credibility, including a large-scale investigation of replicability in the social and behavioural sciences. The empirical effort is extraordinary, involving hundreds of researchers and a substantial coordination infrastructure. The methods, results, and theoretical framing are all of considerable interest. However, the study has also generated headline figures that are readily misinterpreted—an outcome encouraged both by editorial framing and by the structure of the paper itself.

The central difficulty lies in two under-specified concepts that drive the research: the “same question” and the “claim”. Whether a replication tests the “same question” is treated as a local, theory-laden judgement made by individual teams. Yet sameness is treated as constant at two levels simultaneously. First, the multiple replications of a single study should be replicating the same thing, as if each attempt stood in an identical relationship to the original. Second, across all the original studies, sameness should stand in an identical relationship between a replication and its target, regardless of which study is being replicated. If “same” does not mean the equivalent thing within and between replications, the target drifts meaninglessly.

At the same time, replications are of “claims”: scientific claims reduced to directional empirical statements, detached from the estimands, models, and analytic pipelines. That is, the claim is detached from the scientific meaning that gave it purchase in the original study. The same problem with “claims” arose in the team’s Nature paper on analytic robustness. Abstracting scientific claims into more generic “claims” produces a mismatch between design and inference. Heterogeneous interpretations of what is actually being tested are collapsed into standardised statistical comparisons. Apparent agreement or disagreement may therefore reflect shifts in underlying targets rather than genuine replication or failure.

A related issue is that the study attempts to straddle internal and external validity without resolving their tension. It presents itself as assessing whether findings replicate, but in practice examines how results behave under modest variation in context, measurement, and implementation—something closer to robustness or transportability than strict replication. The use of multiple, non-equivalent metrics of “success” in the Nature article reinforces this ambiguity. Replication rates vary substantially depending on the criterion, yet a single headline figure is foregrounded: “Half of social-science studies fail replication test in years-long project”. The result is a study that is informative about the behaviour of findings (and researchers) under perturbation, but is easily—and predictably—read as making stronger claims about the reliability or truth of scientific results than its design can support.
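
A toy simulation shows how strongly the headline figure depends on the success criterion. The three criteria below are illustrative stand-ins (common choices in this literature), not the exact metrics used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pairs, n = 1_000, 100                        # study pairs; sample size per study
true_effects = rng.normal(0.3, 0.15, n_pairs)  # heterogeneous true effects
se = 1 / np.sqrt(n)                            # sampling error of each estimate

orig = true_effects + rng.normal(0, se, n_pairs)  # original estimates
rep = true_effects + rng.normal(0, se, n_pairs)   # replication estimates

criteria = {
    "significant and same direction":
        (np.abs(rep) > 1.96 * se) & (np.sign(rep) == np.sign(orig)),
    "original and replication statistically indistinguishable":
        np.abs(rep - orig) < 1.96 * se * np.sqrt(2),
    "at least half the original effect size":
        np.abs(rep) > 0.5 * np.abs(orig),
}
for name, passed in criteria.items():
    print(f"{name}: {passed.mean():.0%}")  # three different 'replication rates'
```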

Underlying both issues is a deeper disagreement about what replication is for. The paper’s opening paragraph explicitly reflects this tension. One reference is the National Academies of Sciences (NAS) report, which defines replication in procedural and statistical terms: collect new data using similar methods and assess whether results are consistent, typically via effect sizes and uncertainty intervals. The other reference is a 2020 PLoS Biology article by Nosek and Errington (the two senior authors of this Nature paper), who argue that the NAS definition is not merely imprecise but conceptually mistaken. On the Nosek-Errington account, determining that a study is a replication is a theoretical commitment. Both confirming and disconfirming outcomes must be treated in advance as diagnostic of the original claim. The Nature paper adopts this language—replication teams were instructed to produce “good faith tests” of claims—but the article reports results entirely using metrics derived from the procedural-statistical tradition of NAS. This is not a superficial inconsistency. The two frameworks imply different standards of success, different interpretations of failure, and different meanings for any aggregated replication rate. The headline figures that have circulated are products of the latter framework; whether they would survive translation into the former is not addressed.

It is here that Campbell and Stanley’s observation, and its formalisation, becomes decisive. The procedural-statistical approach implicitly treats internal validity as primary and assumes that external validity can be inferred from it. That is, if results are consistent, the finding travels. The structural trade-off shows that this assumption cannot hold. The very steps taken to secure internal validity constrain the scope of generalisation. A high replication rate under this framework may therefore be simultaneously informative and misleading. It indicates that a result can be reproduced under sufficiently similar conditions, while obscuring how narrow those conditions may be. The Nosek-Errington framework recognises the need for theoretical commitment, but without a principled account of causal structure it cannot resolve the tension either. What the Nature paper ultimately demonstrates—perhaps inadvertently—is that replicability is not a property of findings alone. It is a property of the relationship between a finding and the conditions under which it is tested. This underscores a Cartwrightian notion of relationships tied to particular material configurations: nomological machines. Until that relationship is made explicit, headline replication rates will continue to invite overconfident conclusions in both directions and admonitions for better methods.


I did not have access to the published article, which is behind the Springer Nature paywall. Instead, I relied on the publicly available preprint.

Analytic robustness could be a real problem

A recent article in Nature on the robustness of research findings in the social and behavioural sciences found that only 34% of re-analyses of the data yielded the same result as the original report. This sounds horrible. It sounds like two-thirds of the research that social and behavioural scientists are doing is low-quality work that certainly does not deserve to be published. One might reasonably ask if “confabulist” rather than “scientist” might not be a better job title.

Unfortunately, the edifice of “robust research” has been built on foundations of sand. The research shares many of the weaknesses of another article recently published in Science Advances, which I discuss here. There is little that can be concluded from the research that could actually inform scientific practice or permit any observation about the quality or robustness of the original articles. It does, however, say something of interest for sociologists of science about the diversity of views that researchers have about how to re-analyse data to address conceptual claims.

The procedure followed in the Nature article was described thus:

To explore the robustness of published claims, we selected a key claim from each of our 100 studies, in which the authors provided evidence for a (directional) effect. We presented each empirical claim to at least five analysts along with the original data and asked them to analyse the data to examine the claim, following their best judgement and report only their main result. The analysts were encouraged to analyse those studies where they saw the greatest relevance of their expertise.

The word “claim” here does a lot of work. One might reasonably argue that a scientific claim in a published article is a statement of finding in the context of the hypothesis, the model, the analytic process, and the results. But this is not what is meant here. That full scientific sense of a claim is closer to what the Centre for Open Science team use as a starting point for a separate article on “reproducible” research. In the context of this article, a “claim” is some vaguer statement of finding. It is a single isolated claim, it has a direction of effect, and, critically, it is “phrased on a conceptual and not statistical level”.

The conceptual claim is closer to a vernacular claim. It is closer to the kind of thing you might say at a dinner party or read in the popular science section of a magazine. Something like, “did you hear that single female students report lower desired salaries when they think their classmates can see their preferences?” (Claim 025).

Under this framework, one should be able to abstract a full scientific claim into a conceptual claim, and if the conceptual claim is robust, independent scientists analysing the same data, making equally sensible analytic choices, will converge on the conceptual claim. The challenge is that the pool of independent and equally sensible scientists needs to agree (without consultation) on how the conceptual claim is to be translated into a scientific claim. Part of the science is deciding on the estimand for testing the claim, but the estimand is fixed by the analytic choice, not by the conceptual claim. If two scientists analyse the same dataset but target different estimands through their analytic choices, they are not converging on the same conceptual claim. Against all logic, an analytic schema targeting a different estimand that nonetheless produces an estimate close to that of the original paper supports the robustness of the paper.

The framework, therefore, has a double incoherence. First, divergence of estimates (between the original analysis and re-analysis) is misread as fragility when it may simply reflect different estimands—different scientists sensibly translating the conceptual claim into different scientific claims. Second, and more damaging, convergence is misread as robustness when it may be entirely spurious—two analysts targeting different estimands who happen to produce similar point estimates are not confirming each other. They’re producing agreement by accident, across questions that aren’t the same question.

So the framework is wrong in both directions simultaneously. It penalises legitimate scientific pluralism and rewards numerical coincidence. A study could score as highly robust because several analysts happened to get similar numbers while asking entirely different questions. A study could score as fragile because several analysts made defensible but divergent estimand-constituting choices that led to genuinely different answers to genuinely different questions.
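
A toy example makes the estimand problem concrete (invented data, not a re-analysis of any study in the paper): two analysts, both behaving sensibly, answer different questions from the same dataset:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2_000
treated = rng.binomial(1, 0.5, n)                # treatment indicator
young = rng.binomial(1, 0.5, n).astype(bool)     # subgroup indicator

# True effects differ by subgroup: 0.40 for the young, 0.10 for the old.
y = 0.40 * treated * young + 0.10 * treated * ~young + rng.normal(0, 1, n)

# Analyst A targets the average treatment effect in the full sample.
ate = y[treated == 1].mean() - y[treated == 0].mean()   # estimand: ATE, ~0.25

# Analyst B restricts to the young subgroup (a different estimand).
yb, tb = y[young], treated[young]
cate = yb[tb == 1].mean() - yb[tb == 0].mean()          # estimand: CATE, ~0.40

print(f"Analyst A (ATE): {ate:.2f}; Analyst B (CATE, young): {cate:.2f}")
# Divergence here reflects different estimands, not fragility; and with noisier
# data the two numbers could just as easily coincide, manufacturing "robustness".
```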

There is another and far more interesting reading of this paper, one that has neither a click-bait quality nor the opportunity to remonstrate. Where the authors have identified fragility (or a lack of robustness), another reader could legitimately and positively see vitality and methodological pluralism. The social and behavioural sciences work in the messy space of self-referential agents actively interacting with and changing the environments in which they live and do science. It is hardly surprising that epistemic pluralism is a consequence. The 34% figure is not a scandal. It is valuable (and under-appreciated) data about the nature of social reality.


I did not have access to the published article, which is behind the Springer Nature paywall. Instead, I relied on the publicly available preprint.