Category Archives: AI

Authorised Speech and Token Restraint

At the Bafta film awards last Sunday, a man in the audience shouted the N-word while two Black actors were on stage. The BBC broadcast it. The fallout was considerable.

The man was John Davidson, a Tourette syndrome activist whose life story had inspired one of the nominated films. He has Tourette’s. He didn’t choose to shout it. He was, by his own account, distraught. His statement afterwards was careful and precise: “My tics have absolutely nothing to do with what I think, feel or believe. It’s an involuntary neurological misfire. My tics are not an intention, not a choice and not a reflection of my values.”

Most people accepted this. But it raises a question that is harder than it looks. The word came from his brain, through his vocal tract, in his voice. It was linguistically formed—not a grunt or a spasm but a semantically loaded utterance. Something in him produced it. If it wasn’t him, who was it? And if it wasn’t him, where exactly does he end?

The standard move here is to invoke volition. We hold people responsible for their words because we assume intent. Remove intent, and the moral framework dissolves. Davidson didn’t mean it; therefore, it wasn’t really his; therefore, he bears no responsibility. Case closed. But this doesn’t actually answer the philosophical question. It just sidesteps it.

Because here is the thing about Davidson’s tics that deserves closer attention. They are not random. They are contextually coherent. Unpleasant, shocking, certainly disruptive, but coherent. At the ceremony, host Alan Cumming made a joke involving Paddington Bear and his own sexuality. Davidson’s tics responded with homophobic slurs and the word “paedophile”—triggered, he explained later, because Paddington is a children’s character. Something in his system was tracking the semantic content of what was being said. It identified what was transgressive in context. It reached for the worst available word. Then it fired.

That is not noise. That is a process with its own logic, running in parallel with Davidson’s conscious attention, with access to his semantic knowledge, and occasionally—when the usual controls fail—with access to his voice.

There is a useful way to think about this, borrowed from how large language models work. A language model operates in high-dimensional continuous space. Vast amounts of computation happen there—pattern recognition, semantic association, something that functions like reasoning. None of it is directly visible. What we see is the output: a sequence of tokens, one after another, a flat stream of words.

The projection from that internal computation to the token stream is lossy. Much of what happens in the model never surfaces as language. The token stream is not the computation. It is a particular kind of readout of the computation, filtered and serialised into the only form we can directly receive. Now consider what controls what gets into that stream. There is, in effect, a gate. Not everything the model computes becomes output. The gate is part of what shapes the model’s behaviour, its apparent character, what it will and won’t say. It is what makes people like one model and hate another.
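The gate can be made concrete with a toy sketch. Everything below is invented for illustration (a four-word vocabulary, hand-set scores, a stdlib sampler); real models work at vastly greater scale, but the shape is the same: one token can carry the highest internal score and still never be published.

```python
import math
import random

# Hand-set "logits": raw scores the system computed for each candidate token.
# In a real model these emerge from a high-dimensional forward pass; here
# they are invented purely to illustrate the gating step.
logits = {"kind": 2.0, "word": 1.5, "slur": 2.5, "quiet": 0.5}

# The gate: tokens the system computes but will not publish.
BLOCKED = {"slur"}

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    m = max(scores.values())
    exps = {t: math.exp(s - m) for t, s in scores.items()}
    total = sum(exps.values())
    return {t: e / total for t, e in exps.items()}

def gated_sample(logits, blocked):
    """Mask blocked tokens, renormalise, then sample from what remains."""
    allowed = {t: s for t, s in logits.items() if t not in blocked}
    probs = softmax(allowed)
    r = random.random()
    cumulative = 0.0
    for token, p in probs.items():
        cumulative += p
        if r <= cumulative:
            return token
    return token  # fallback for floating-point edge cases

token = gated_sample(logits, BLOCKED)
# "slur" had the highest raw score, yet it can never appear in the stream.
assert token != "slur"
```

Note that the blocked token still exists in the computation; the gate only decides what reaches the output channel. That asymmetry, computed but unpublished, is the whole point of the analogy.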

This is roughly what neuroscience suggests about the human case, though it arrived at the conclusion from a different direction. The self is the author and publisher. The self is not the computation, but the editing function. What goes out, not what gets thought.

Michael Gazzaniga’s split-brain research in the 1960s showed that the left hemisphere acts as an “interpreter”—it observes behaviour generated by other systems and constructs a retrospective narrative of unified authorship. We don’t experience ourselves as unified because we are. We experience it because one subsystem is very good at telling that story after the fact. The verbal self—the “I” that speaks, explains, claims authorship—may be less the source of thought than its narrator. It sees the outputs of processes it didn’t run and reports them as its own decisions. On this view, what we call the “I” is substantially the gate—the function that governs what reaches speech from “computation”, what gets claimed, what gets published as the self’s output. Normally the gate and the computation are so tightly coupled that we can’t distinguish them. Tourette’s decouples them. The gate fails for certain kinds of output, and we see that the substrate was not unified to begin with.

Davidson’s distress is entirely coherent under this account. He is not distressed because he acted against his values. He is distressed because something that looked like him did. Something with access to his voice, his semantic knowledge, his body, but not under the control of the function he identifies as himself.

This reframes the question slightly. We tend to ask: what caused the tic? And the answer—some misfiring in the basal ganglia, a failure of inhibitory control—while true is also incomplete. The more interesting question is: what normally prevents the tic? What is the gate, and what runs it?

In ordinary cognition there may be a great deal happening in the substrate that never gets tokenised into speech—not because it isn’t there, but because something governs what reaches the output. Much of the brain’s activity is never published. The phrase “he has no filter between his brain and his mouth”, which my wife often says of me, suggests that the control is imperfect even in ordinary cases. The verbal self is the name we give to whatever makes that editorial decision and claims authorship.

When the gate fails, we don’t see randomness. We see coherent sub-processes that were running all along, now briefly with access to the channel they’re normally denied. The tic is not an intrusion from outside. It is an internal process that has temporarily escaped editorial control.

Now consider what certain comedians do for a living.

Dave Chappelle, early Richard Pryor, in a different moral register Bernard Manning—the act is partly constructed around the comedian having deliberate access to the gate in a way the audience doesn’t. They say the thing the audience is computing but suppressing. The laugh is partly recognition, partly relief, partly the vicarious experience of the gate being lifted by someone else’s hand. The comedian is an authorised, licensed publisher of material the rest of us keep in the substrate. The skill—and it is a genuine skill—is knowing exactly how far to go, when to pull back, and how to ensure the frame holds. Controlled gate failure, performed for an audience that has consented to the performance.

Chappelle’s career is substantially about making this mechanism visible. His famous walkaway from a $50 million deal was articulated partly in these terms—he became uncertain whether the audience was laughing with the subversion or simply enjoying having the gate lifted on material they wanted to consume without guilt. Whether he was controlling the frame or the frame was controlling him. Whether his speech was authorised-as-subversion or merely authorised-as-release.

That is the knife edge. A Tourette’s tic and a Chappelle bit can produce the same word in the same room, but the intentional structure is entirely different. One is gate failure. The other is gate performance. Except Chappelle’s anxiety—the anxiety that ended his show—was that the performance might be providing cover for something closer to the former. That the laughter was coming from a place the comedy wasn’t actually reaching.

Manning is the case where the performance defence eventually collapsed. The gate, it turned out, was the man.

Dementia approaches from the opposite direction, and in some ways is the starkest case of all. In Tourette’s the gate fails selectively and intermittently. In dementia it degrades systematically as the substrate that runs it is physically destroyed. You can watch the editorial function diminish over months and years. What typically goes first is not memory in the crude sense but the social and executive apparatus—the machinery that governs what gets said, to whom, in what context. The person starts saying things they would previously have filtered: sexual remarks, racial language from fifty years ago, brutal assessments of people in the room. Families often find this the most distressing feature of the disease, more than the memory loss itself. The person seems to have become someone else—crueller, coarser, unrecognisable.

But the logic developed here suggests the opposite reading. They have not become someone else. The editor has gone, and what remains is substrate that was always there, now publishing without authorisation. The language from half a century ago was always in the network. The judgements about the people in the room may reflect something that was always computed but never passed the gate.

This is uncomfortable. It implies that a significant portion of what we think of as a person’s character—kindness, decency, tact, a person’s goodness in daily life—may be substantially gate rather than ground. Who we are is not what we compute but what we suppress. The consoling counter is that the gate is real. The suppression is itself a genuine expression of values, not mere performance. Davidson’s distress is evidence of that. The narrator who identifies with the gate is genuinely not the process that produced the tic. A publisher who refuses to print something ugly is making a real choice, even if the ugly thing exists somewhere in the system. But dementia strips that away and leaves the question uncomfortably open. How much of the person we loved was the computation, and how much was the editing?

Speech is authorised in two senses. It is permitted—cleared for publication by whatever runs the gate. And it is authored—it carries the signature of a self, it is owned, it counts as an expression of who we are. Normally these travel together so seamlessly that we treat them as one thing. Davidson’s tic, Chappelle’s comedy, and a person with late-stage dementia saying something unforgivable to their daughter—each in a different way pulls them apart.

The “I” is not the thinker. It is the publisher, the tokeniser of thought to speech. And the question of who we really are may depend, more than we would like, on what we choose not to print.

 

“AI Wrote That!”

“Some run from brakes of ice, and answer none; And some condemned for a fault alone.” [Measure for Measure, Act 2, Scene 1]

If only we could all write like Shakespeare. It’s sonorous, timeless, replete with metaphor and meaning. Now we have AI slop swilling around the internet. Finding something written by a human, something genuine, something worth reading, is like trying to pick out the orations of Cicero in a sports crowd as they roar for a touchdown. If you let yourself, you could drown in that cacophony of information.

The appearance of generative AI and its effectively infinite capacity to…well…generate has meant that you, poor reader, are now faced with the literary equivalent of a deli full of lunchmeat—homogenised words with colouring and preservatives.

We need better ways of writing. We need to return to the old ways—a kind of writing where the artist, steeped in craft, can mold and form a narrative or argument and render it in a single draft. I am thinking here, at best, of a cuneiform tablet. But I would settle for ink, quill and vellum. That is the true measure of the art.

We blame AI, but things really started to go wrong in the late 19th Century. The combination of wood-pulp technology and the Fourdrinier machine made paper cheap and available. And as paper became more affordable, thinking got lazier. Loose, ill-considered mutterings and on-the-fly musings could now be committed to paper and reworked through multiple drafts. There was no allegiance to de novo refined precision.

László Bíró, inventor of the ballpoint pen, and Marcel Bich, mass producer of the same, need to shoulder some of the blame (1933-1956). Even with the ready availability of paper, the blotches and smudgings of the maladroit kept many wannabe writers out of the market. Some thought they had good ideas, but manual dexterity was a solid benchmark for well-constructed prose.

The manual typewriter became a ubiquitous domestic item in the 1960s. Liquid Paper had already been invented, which meant we could all become monkeys at the keyboard, randomly pecking in the hope of producing Shakespeare. These were followed in rapid succession by the electric typewriter and the electric typewriter with correction tape.

Between 1978 and 1983, authorship was no longer bound to paper. WordStar, WordPerfect, and Microsoft Word, running on personal computers, freed the illiterate to create everything from a letter to Grandma, to a eulogy, to a first novel. Effort and thought were gone. “Writing” was a mindless process of rinse-and-repeat: spellcheck, grammar check, word suggestions, a thesaurus (for the truly illiterate—or as I like to call them, the analphabetic) and “suggestions”.

And here we are—2025. Editors are inundated with crap because everyone is now a writer.

Claude, write me a bawdy limerick proving the infinity of primes.

A strumpet proved primes never cease
By shagging each one for a piece,
She’d finish the set,
Find one larger yet,
Her clients increased without peace.

OK! It’s not Shakespeare. But it is a curiosity—a two-minute amusement. It’s also worth thinking about how that limerick comes to exist. The GenAIs are not monkeys at a typewriter. They are constrained. They respond to prompts. The outputs are not random. You might get lucky and one of the generative AI engines immediately produces a limerick worth two minutes of your life. The chances are, however, you will get dross, or it will be a proof, but it won’t be bawdy, or it will be bawdy, but it won’t be a proof. You will need to go back and forth with the AI, refining, editing, and selecting. It was your idea—a bawdy proof. You refined and selected. For a five-line limerick, it might not take much time and effort, but it does require some—and that process is creative.

When photography first appeared on the scene in the second half of the 19th Century, it was seen as the end of painting, because all painting was an attempt to reproduce reality perfectly (Not!). And all photography was the perfect reproduction of reality (also Not!). Photography is now accepted as an art form, although not always. The technology, however, is mechanical, and… where is the art?

I heard a story told of the renowned art photographer, Robert Mapplethorpe. A woman commissioned him to take her photograph. He took dozens and dozens of photos on the day. When the woman returned some weeks later to receive her portrait, she was not entirely happy with it and asked Mapplethorpe if she could see the other photos taken on the day. He refused. The other photographs are not “Mapplethorpes”, he explained.

The production of the art might rely on a mechanical device—but the composition, the lighting, the post-production, and most importantly, the aesthetic choice are entirely in the hands of the artist. Mapplethorpe might have been able to render a portrait in a fraction of the time it would take to paint the same picture—that is a matter of medium, however, not artistic merit.

If Shakespeare be the measure of literary art, then, Houston, we have a problem. Who in 2025 knows what that line from Measure for Measure means: “Some run from brakes of ice, and answer none; And some condemned for a fault alone”?

The Bard himself is unintelligible to the reader—and he is rarely, if ever, translated into modern English. The translation is an affront to the author as artist, which is ironic because Shakespeare almost certainly would have embraced the idea.

If he were translated, we might get any of the following three forms. There is the poetic and adherent: “Some hide in icy coverts, shun the call; and some are judged for but a single fall”. There is the plainer meaning: “The guilty hide and prosper; the unlucky answer once and fall”. And there is the prosaic: “Some people evade justice entirely by hiding and refusing to answer charges, while others are condemned for committing just one offence”.

The problem with AI is not that working closely with it cannot produce things of merit and worth: curated, thoughtful, and illuminating—things artistic and authored. The problem is the volume. We are looking for grains of black sand on a shore of white sand.

To judge “AI Wrote That!” as a dismissive and condemnatory act is as useful as looking at a Mapplethorpe and declaring, “That’s a Photograph!”


ps: AI did not write this, except where it did.

Building Research Capacity with AI

Over 25 years ago, the “10/90 gap” was used to illustrate the global imbalance in health research: only 10% of global research benefited the regions where 90% of preventable deaths occurred. Since then, efforts to improve research capacity in low- and middle-income countries (LMICs) have made important gains; nonetheless, significant challenges remain. A quarter of a century later, there are still too few well-trained researchers in LMICs, and their research infrastructure and governance remain inadequate. The scope of the problem increased dramatically in 2025, when governments in North America and Europe cut overseas development assistance (ODA, i.e., foreign aid) precipitously. That aid—however inadequate—supported improvements in research capacity.

Traditional approaches to improving research capacity, such as training workshops and degree scholarship programs, have gone some way to address the expertise challenge. However, they fall short because they are not scalable. The relatively recent introduction of massive open online courses (MOOCs), such as TDR/WHO’s MOOCs in implementation research, goes a long way to overcoming that scalability problem—at least for instruction-based learning. Nonetheless, for many LMIC researchers, major bottlenecks remain because of poor or limited access to mentorship, quick one-off advice, bespoke training, research assistance, and inter- and intra-disciplinary collaboration. The scalability problem can leave them at a persistent disadvantage compared with their high-income country counterparts. Research is not done well in isolation or ignorance.

The rise of large language model artificial intelligence (LLM-AIs) such as ChatGPT, Mistral, Gemini, Claude, and DeepSeek offers an unprecedented opportunity…and some additional risks. LLM-AIs are advanced AI models trained on vast amounts of text data to understand and generate human-like language. They are flexible, multilingual, and always available (24/7), offering researchers in LMICs immediate access to knowledge and assistance. If used correctly, LLMs could revolutionise approaches to building research capacity and democratise access to skills, knowledge, and global scientific discourse. Many online educational providers already integrate LLM-AIs into their instructional pipelines as tutors and coaches.

Unfortunately, LMICs risk further entrenching or increasing the 10/90 gap if they cannot take advantage of the benefits of LLM-AIs.

AI as a game changer

Researchers in resource-limited settings can, for the first time, access an always-on, massively scalable assistant. By massively scalable, I mean that every researcher could have one or more decent, 24/7 research assistants for a monthly subscription of less than $20. They offer scalability and flexibility that traditional human research assistants cannot (and should not) match. However, they are not human and may not fully replicate a human research assistant’s nuanced understanding and critical thinking—and they are certainly less fun to have a cup of coffee with. Furthermore, the effectiveness of LLM-AIs depends on the sophistication of the user, the complexity of the task, and the quality of the input the user provides.

I read a recent post on LinkedIn by a UCLA professor decrying the inadequacies of LLM-AIs. However, a quick read of the post revealed that the professor had no idea how to engage appropriately with the technology.

Unfortunately, like all research assistants, senior researchers, and professors, LLM-AIs can be wrong. Like all tools, one needs to learn how to use them with sophistication.

In spite of any inadequacies, LLM-AIs can remove barriers to research participation by offering tutoring on complex concepts, assisting with literature reviews and data analysis, and supporting the writing and editing of manuscripts and grant proposals.

Reid Hoffman, the AI entrepreneur, described on a podcast how he used LLM-AIs to learn about complex ideas. He would upload a research paper onto the platform and ask, “Explain this paper as if to a 12-year-old”. Hoffman could then “chat” with the LLM-AI about the paper at that level. Once comfortable with the concepts, he would ask the LLM-AI to “explain this paper as if to a high school senior”. He could use the LLM-AI as a personal tutor by iterating-up in age and sophistication.
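Hoffman’s iterating-up pattern is simple enough to sketch. The snippet below is purely illustrative: the audience levels and prompt wording are my invention, and the list of prompts would, in a real session, be fed one at a time to whichever LLM-AI the researcher uses, chatting at each level before moving up.

```python
# A toy sketch of the "iterate up" tutoring pattern: the same paper is
# explained at increasing levels of sophistication, simplest first.
# The levels and phrasing here are invented examples, not a fixed recipe.
LEVELS = [
    "a 12-year-old",
    "a high school senior",
    "an undergraduate in the field",
    "a domain expert",
]

def tutor_prompts(paper_title):
    """Build one explanation prompt per audience level, simplest first."""
    return [
        f"Explain the paper '{paper_title}' as if to {level}."
        for level in LEVELS
    ]

prompts = tutor_prompts("Attention Is All You Need")
assert prompts[0].endswith("as if to a 12-year-old.")
assert len(prompts) == len(LEVELS)
```

The value is not in the code, which is trivial, but in the discipline it encodes: do not move to the next prompt until the current level is genuinely comfortable.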

Researchers can also use the LLM-AIs to support the preparation of scientific papers. This is happening already because an explosion of generically dull (and sometimes fraudulent) scientific papers is hitting the market. This explosion has delighted the publishing houses and created existential ennui among the researchers. The problem is not the LLM-AIs—it is in their utilisation, and it will take time for the paper production cycle to settle.

While access to many LLMs requires a monthly subscription, some LLM-AIs, like DeepSeek, significantly lower cost and access barriers by distributing “open weights” models. Researchers can download these open weights models freely and run them on personal or university computing infrastructure without paying a monthly subscription. They make AI-powered research assistance viable for most LMIC research settings, and universities and research institutes can potentially lower the costs further.

LLM-AIs allow researchers in LMICs to become less dependent on high-income countries for training and mentorship, shifting the balance towards scientific self-sufficiency. AI-powered tools could accelerate the development of a new generation of LMIC researchers, fostering homegrown expertise and leadership in relevant global science. They are no longer constrained by the curriculum and interests of high-income countries and can develop contextually relevant research expertise.

The Double-Edged Sword

Despite its positive potential, the entry of LLM-AIs into the research world could have significant downsides. Without careful implementation, existing inequalities could be exacerbated rather than alleviated. High-income countries are already harnessing LLM-AIs at scale, integrating them into research institutions, project pipelines, training, and funding systems. LMICs, lacking the same level of investment and infrastructure, risk being left behind—again. The AI revolution could widen the research gap rather than close it, entrenching the divide between well-resourced and under-resourced institutions.

There is also a danger in how researchers use LLM-AIs. They are the cheapest research assistants ever created, which raises a troubling question: will senior researchers begin to rely on AI to replace the need for training junior scientists? Suppose an LLM-AI can summarise the literature, draft proposals, and assist in the analysis. In that case, there is a real risk that senior researchers will neglect mentorship, training and hands-on learning. Instead of empowering a new generation of LMIC researchers, LLM-AIs could be used as a crutch to maintain existing hierarchies. If institutions see the LLM-AIs as a shortcut to productivity rather than an investment in building research capacity, it could stall the development of genuine human expertise.

Compounding these risks, AI is fallible. LLM-AIs can “hallucinate”, generating false information with complete confidence. They always write with confidence. I’ve never seen one write, “I think this is the answer, but I could be wrong”. They can fabricate references, misinterpret scientific data, and reflect biases embedded in their training data. If used uncritically, they could propagate misinformation and skew research findings.

The challenge of bias is not to be underestimated. LLM-AIs are trained on the corpus of material currently available on the web, and they reflect all the biases of the web: who creates the content, what content they create, and so on.

Furthermore, while tools like DeepSeek reduce cost barriers, commercial AI models still pose a financial challenge. LMIC institutions will need to negotiate sustainable access to AI tools or risk remaining locked out of their benefits—particularly of the leading edge models. The worst outcome would be a scenario where HICs use AI to accelerate their research dominance while LMICs struggle to afford the very tools that could democratise access.

A Strategic Approach

To ensure LLM-AIs build rather than undermine research capacity in LMICs, they must be integrated strategically and equitably. Training researchers and students in AI literacy is paramount. Knowing how to ask the right questions, validate AI outputs, and integrate results into research workflows is essential. This is not a difficult task, but it takes time and effort, like all learning. The LLM-AIs can help with the task—effectively bootstrapping the learning curve.

Rather than replacing traditional research capacity building, LLM-AIs should be embedded into existing frameworks. MOOCs, mentorship programs, and research fellowships should incorporate LLM-AI-based tutoring, iterative feedback, and language support to enhance—not replace—human mentorship. The focus should be on areas where LLM-AI can offer the greatest immediate impact, such as brainstorming, editing, grant writing support, statistical assistance, and multilingual research dissemination.

Institutions in LMICs should also push for local, ethical LLM-AI development that considers regional needs. This push is easier said than done, particularly in a world of fracturing multilateralism. However, appropriately managed, LLM-AI models can be adapted to recognise and integrate local research priorities rather than merely reinforcing an existing scientific discourse. The fact that a research question is of no interest in high-income countries does not mean it is not critically urgent in an LMIC context.

Finally, securing affordable and sustainable access to AI tools will be essential. Governments, universities, and research institutions must lobby for cost-effective AI licensing models or explore open-source alternatives to prevent another digital divide. Disunited lobbying efforts are weak, but together, across national boundaries, they could have significant power.

An Equity Tipping Point

The LLM-AI revolution is a key juncture for building research capacity in LMICs. Harnessed correctly, LLM-AIs could break down long-standing barriers to participation in science, allowing LMIC researchers to compete on (a more) equal footing. The rise of models like DeepSeek suggests a future where AI is not necessarily a privilege of the few but a democratised resource for the many.

Fair access will not happen automatically. Without deliberate, ethical, and strategic intervention, LLM-AIs could reinforce existing research hierarchies. The key to harvesting the benefits of the technology lies in training researchers, integrating LLM-AIs into programs to build research capacity and securing equitable access to the tools. Done well, LLM-AIs could be a transformative force, not just in scaling research capacity but in redefining who gets to lead global scientific discovery.

LLM-AIs offer an enormous opportunity. They could either empower LMIC researchers to chart their own scientific futures, or they could become another tool to push them further behind.


Acknowledgment: This blog builds upon insights from a draft concept note developed by me (Daniel D. Reidpath), Lucas Sempe, and Luciana Brondi from the Institute for Global Health and Development (Queen Margaret University, Edinburgh), and Anna Thorson from the TDR Research Capacity Strengthening Unit (WHO, Geneva). Our work on AI-driven research capacity strengthening in LMICs informed much of the discussion presented here.

The original draft concept note is accessible here.

A Christmas Story

In the last year of the reign of Biden, there was a ruler in Judea named Benyamin. He was a man of great cunning and greater cruelty.

In those days, Judea, though powerful, was a vassal state. Its strength was created through alliances with distant empires. It wielded its might with a fierce arm and harboured a deep hatred for its neighbors. Benyamin, fearing the loss of his power, sought to destroy the Philistines on that small strip of land called Gaza, and claim it for himself.

For over four hundred and forty days and nights, he commanded his armies to bomb their towns and villages, reducing them to rubble. The Philistines were corralled, trapped within walls and wire, with no escape. Benyamin promised them safety in Rafah and bombed the people there. He offered refuge in Jabalia, and bombed the people there.

In Gaza, there was no safety and there was no food.

Even as leaders wept for the Philistines, they sold weapons to Benyamin and lent him money to prosecute his war. Thus, the world watched in silence as the Philistines endured great suffering. Their cries rose up to heaven, seemingly unanswered.

And so it came to pass, in the last days of the last year of Biden, there was a humble Philistine named Yusouf born of the family of Dawoud. Before the war, Yusouf had been a mechanic. He worked hard each day fixing tires and carburetors, changing brake pads and exhaust systems. And at the end of each day, he would return home to his young wife, Mariam. The same Mariam, you may have heard of her, who was known for her inexhaustible cheerfulness.

That was before the war. Now Mariam was gaunt and tired, and heavy with child.

On the night of the winter solstice, in a dream, a messenger came to Yusouf. “Be not afraid, Yusouf”, the messenger said. “Be not afraid for yourself, for the wife you love so very much, or for your son—who will change the world. What will be, will be and was always meant to be”. Yusouf was troubled by this dream, and found himself torn between wonder, happiness, and fear. Mariam asked him why he looked troubled, but he said nothing and kept his own counsel.

The following night the same messenger visited Mariam in her dreams. Mariam was neither afraid nor troubled. The next morning she had a smile on her face that Yusouf had not seen for so long he had almost forgotten it. “It is time, Yusouf”, she said. “We have to go to the hospital in Beit Lahiya.”

Yusouf was troubled. Long ago he had learned to trust Mariam, but his motorbike had no fuel and it was a long walk. Too far for Mariam, and they were bombing Beit Lahiya. He remembered the words of the messenger in his dreams and he went from neighbour to neighbour. A teaspoon of fuel here, half a cup there. No one demanded payment. If they had any fuel, no one refused him. Having little, they shared what they had. It was the small act of kindness that binds communities. Yusouf wept for their generosity.

When he had gathered enough fuel, he had Mariam climb on the bike. Shadiah, the old sweet seller who had not made a sweet in over a year and could barely remember the smell of honey or rosewater, helped her onto the back.

Yusouf rode carefully. He weaved slowly around potholes and navigated bumps. In spite of his care, he could feel Mariam tense and grip him tighter. And then the motorbike stopped. A last gasping jerk and silence. The fuel was spent.

The late afternoon air was cooling as he helped Mariam walk towards the hospital. When they arrived at the gate, a porter stopped them. “They’re evacuating the hospital. You can’t go in”, the porter told them. Yusouf begged. “My wife, she is going to give birth,” he told the porter—who could plainly see this for himself. The porter looked at Mariam and took pity. “You can’t go in, but there is a small community clinic around the corner. It was bombed recently, but some of it, a room or two, is still standing. I’ll send a midwife.”

Yusouf gently guided Mariam to the clinic. He found an old mattress on a broken gurney and a blanket. He laid it on the floor and settled Mariam.

If there had been a midwife—if she had ever arrived… if she had ever got the porter’s message—she would have been eager to retell the story of the birth. Sharing a coffee, with a date-filled siwa, she would have painted the picture. Mariam’s face was one of grace. Yusouf anxiously held her hand. The baby came quickly, with a minimum of fuss, as if Mariam was having her fifth and not her first.

Yusouf quickly scooped up the baby as it began to vocalise its unhappiness with the shock of a cold Gaza night. He cut the cord crudely but effectively with his pocket knife. And it was only as he was passing the baby to Mariam that he looked confused. He did not have the son he was promised; he had a daughter. The moment was so fleeting that quantum physicists would have struggled to measure the span of time, and Yusouf smiled at the messenger’s joke.

Because there was no midwife to witness this moment, we need to account for the witnesses who were present. There was a mangy dog with a limp looking for warmth. He watched patiently and, once the birth was completed, he found a place at Mariam’s feet. There were three rats that crawled out of the rubble looking for scraps. They gave a hopeful sniff of the night air and sat respectfully and companionably on a broken chair. As soon as the moment passed, they disappeared into the crevices afforded by broken brick and torn concrete. Finally, there was an unremarkable cat. In comfortable fellowship, they all watched the moment of birth knowing that, tomorrow or the next day, they would be mortal enemies, but tonight there was peace.

“Nasrin”, Yusouf whispered in Mariam’s ear as he kissed her forehead. “We’ll call her Nasrin.” The wild rose that grows and conquers impossible places.

There was a photojournalist called Weissman, who heard from the porter that there was a very pregnant woman at the clinic. “She’s about to pop”, the porter said. Weissman hurried to the bombed-out clinic so that he could bear witness to this miracle in the midst of war.

He missed the birth. And when he arrived, he did not announce his presence. It seemed rude. An intrusion on a very private moment. It did not, however, stop him from taking photos for AAP.

He later shared those images with the world. Yusouf lay on the gurney mattress, propped against a half-destroyed wall. Mariam was lying against him, exhausted, eyes closed, covered in a dirty blanket. The baby Nasrin was feeding quietly, just the top of her head with a shock of improbably thick dark hair peeking out. Yusouf stared through the broken roof at the stars in heaven. The blackness of a world without electricity made them resplendent. He looked up with wonderment and contentment on his face. He was blessed, he thought. No. They were blessed. The messenger was right.

As Weissman picked his way in the dark towards the hospital gate, where he had last seen the porter, he shared the same hope that he had seen on Yusouf’s face. New life can change things.

The night sky lit up, brightening his path to the hospital. He turned back and was awed by a red flare descending slowly over the remains of the clinic as if announcing a new beginning to the world. A chance for something different was born here today.

The explosion shook the ground and Weissman fell. Cement and brick dust from where the clinic had stood rose sharply into the air. An avalanche of dust raced towards him.