Monthly Archives: November 2024

What the Democrats did wrong

I feel like the guy at the all-you-can-eat buffet who didn’t understand that the prudent consumer treats it as an all-you-should-eat buffet. I am an avid consumer of modern American politics, but you can get too much of a good thing. The most recent presidential election was a long, exhausting, roller-coaster ride that crashed at the end. It also has a fair probability of being the most consequential US election since Roosevelt or possibly (hopefully not) Lincoln.

Now that the election is over, the dessert trolley has caught my eye. I am fascinated by the public rending of garments and loud wailing as Democrats simultaneously mourn their loss and look for someone to blame.

One of the best characterizations of these empty, futile gestures was comedian Desi Lydic’s shredding of them on The Daily Show.

Vice President Kamala Harris ran what many regard as an extraordinarily disciplined campaign in an extraordinarily short time. She did this after picking up the reins dropped by an obviously geriatric Joe Biden. Joe mumbled and stumbled through the first three-quarters of the campaign before bowing out. Thank you, Joe!

In the German federal election held on 6 November 1932, the two best-performing parties were the NSDAP and the SPD. The former outperformed the latter by 12.7 points. I imagine the SPD (the Social Democratic Party) held a postmortem. Why did we do so badly, they asked themselves? And I can’t help but wonder, following the loss to Donald Trump, what advice modern American Democratic strategists would offer the SPD for their next election. What lessons might some Democrats think they have learned that are worth passing on? Here are some thoughts about that conversation.

[Democrat]: Guys, we just lost our own bruising election. We are you, and we have advice. It would help if you broadened your appeal. You lost the working class. And remember, you’re just coming out of the Great Depression, so repeat after me … “It’s the economy, stupid!!!” If you’re going to beat the NSDAP, you need to step it up. And for God’s sake, stop with the identity politics.

[SPD]: We hear what you say, but our platform does support workers’ rights and trade unions. We advocated for unemployment benefits and social welfare programs. Our policies focused on dealing with the economic fallout of the Great Depression. We know it’s about the economy(!), and we’re not stupid!

Well, your messaging is obviously all wrong. The voters aren’t hearing you, or they don’t believe you. Sometimes, they need a point of focus, a metaphor for their troubles. Have you thought about demonising a particular group or class of people who you could blame for controlling the economy? Tell them about a group they can hate. Bankers are a good place to start—and better still if you can link your bankers to some out-group!

Also, you really need to get some distance from the Marxists.

We hate the Marxists. Our platform is explicitly anti-communist.

Yeah … but the voters aren’t hearing you or don’t believe you. Could you give them a point of focus? Make the abstract real. Have you thought about demonising a particular group or class of people who you could identify with the communists? Create a mental portrait of an evil Bolshevik in their minds … Find an out-group and make that group the hated Bolshevik … A communist out-group—that’s a group everyone can hate.

And jobs! Don’t forget jobs. Why should good, pure German citizens not have greater opportunities to engage in the workforce? Is there a group of people who are not quite your equals, a sort of under-person who you could target, some foreigner who’s stealing your jobs?

Adolf Hitler, the leader of the NSDAP (Nazi Party), had a go-to group to hate—the Jews. But he had hate to spare. His tent of loathing had a broad canopy: Communists, Slavs, Gays, Roma.… Hitler’s ire had two foci: groups for public demonisation and groups who threatened his power.

In May 1928, Hitler’s Nazi Party won only 2.6% of the popular vote. In September 1930, it won 18.3%. In July 1932, it won 37.3%—the first time the Nazis outperformed the Social Democrats. The November 1932 election was the last free election before Hitler seized power in 1933. Mein Kampf (“My Struggle”) is his political manifesto/autobiography. In it, he introduced the idea of the “Big Lie”, a lie so audaciously false that some people will believe it. Fake News!

Let’s return to the original question: what did the Democrats do wrong?

I have heard that people did not know who Kamala Harris was. They couldn’t vote for someone they didn’t know. They most certainly did know Donald Trump! He had been their 45th President. Donald Trump was there for all to see. He praised whites and demonised non-whites. He gave government assistance to allies and punished (perceived) enemies. He praised authoritarians, demagogues, and dictators and railed against democratic allies. He argued for the Justice Department to target his enemies. He overstated the threats posed by protesters and suggested that the troops should shoot them. He engineered the loss of reproductive rights. He gave billions of dollars of taxpayers’ money to other billionaires.

He is a racist who says he is the least racist person he knows. A rapist who says that no one loves and respects women more than he does (shudder…). He boasts about his sexual assaults. He is a fraud—“no one knows business better than I do”. An insurrectionist. A notorious liar. As Nancy Pelosi says, if his lips move, he’s lying.

If the electoral choice is between a demagogue and a lawn chair, you vote for the lawn chair. But you didn’t have a lawn chair. You had a smart, accomplished woman of color who has worked for the people for most of her adult life.

What lesson should the Democrats learn from their electoral loss? Easy. Nearly half of the American voters in the last election lost their moral compass. They either agree with, don’t care about, or are easily duped by a power-hungry, narcissistic charlatan, showman, and felon.

My advice for the future. Give no quarter. Do not edge to the right—you will not win the descent into hell. Do not forsake your values. Do not forsake the vulnerable. Drag your republic back to where it should be. Call out the demagogue for who he is. Defend every liberty and every right you have—and defend those rights for others. When they take them from you (and they will take them from you), do not accede.

Fighting words(!), but cheap, essentially worthless advice. I am a detached observer only able to consume American politics from a distant armchair. It may affect me—as it will affect the rest of the world—but I have no power to affect it. Nonetheless, I can hardly wait for my next visit to the buffet.

UKRI got its A.I. policy half right

UKRI AI policy: Authors on the left. Assessors on the right (image generated by DALL.E)

When UKRI released its policy on using generative artificial intelligence (A.I.) in funding applications this September, I found myself nodding until I wasn’t. Like many in the research community, I’ve been watching the integration of A.I. tools into academic work with excitement and trepidation. UKRI’s approach turned out to be a puzzling mix of Byzantine architecture and modern chic.

The modern chic, the half they got right, is on using A.I. in research proposal development. By adopting what amounts to a “don’t ask, don’t tell” policy, they have side-stepped endless debates that swirl about university circles. Do you want to use an A.I. to help structure your proposal? Go ahead. Do you prefer to use it for brainstorming or polishing your prose? That’s fine, too. Maybe you like to write your proposal on blank sheets of paper using an HB pencil. You’re a responsible adult—we’ll trust you, and please don’t tell us about it.

The approach is sensible. It recognises A.I. as just one of the many tools in the researcher’s arsenal. It is no different in principle from grammar-checkers or reference managers. UKRI has avoided creating artificial distinctions between AI-assisted work and “human work” by not requiring disclosure. Such a distinction also becomes increasingly meaningless as A.I. tools integrate into our daily workflows, often completely unknown to us.

Now let’s turn to the Byzantine—the half UKRI got wrong—the part dealing with assessors of grant proposals. And here, UKRI seems to have lost its nerve. The complete prohibition on using A.I. by assessors feels like a policy from a different era—some time “Before ChatGPT” (B.C.) was released in November 2022. The B.C. policy fails to recognise the enormous potential of A.I. to support and improve human assessors’ judgment.

You’re a senior researcher who’s agreed to review for UKRI. You have just submitted a proposal of your own, using an A.I. to clean, polish, and improve it. As an assessor, you are now juggling multiple complex proposals, each crossing traditional disciplinary boundaries (which is increasingly regarded as a positive). You’re probably doing this alongside your day job because that’s how senior researchers work. Wouldn’t it be helpful to have an A.I. assistant to organise key points, flag potential concerns, help clarify technical concepts outside your immediate expertise, act as a sounding board, or provide an intelligent search of the text?

The current policy says no. Assessors must perform every aspect of the review manually, potentially reducing the time they can spend on a deep evaluation of the proposal. The restriction becomes particularly problematic when considering international reviewers, especially those from the Global South. Many brilliant researchers who could offer valuable perspectives might struggle with English as a second language and miss some nuance without support. A.I. could help bridge this gap, but the policy forbids it.

The two-sided policy leads to an ironic situation. Applicants can use A.I. to write their proposals, but assessors can’t use it to support the evaluation of those proposals. It is like allowing Formula 1 teams to use bleeding-edge technology to design their racing cars while insisting that race officials use an hourglass and the naked eye to monitor the race.

Strategically, the situation worries me. Research funding is a global enterprise; other funding bodies are unlikely to maintain such a conservative stance for long. As other funders integrate A.I. into their assessment processes, they will develop best-practice approaches and more efficient workflows. UKRI will fall behind. This could affect the quality of assessments and UKRI’s ability to attract busy reviewers. Why would a busy senior researcher review for UKRI when other funders value their reviewers’ time and encourage efficiency and quality?

There is a path forward. UKRI could maintain its thoughtful approach to applicants while developing more nuanced guidelines for assessors. One approach would be a policy that clearly outlines appropriate A.I. use cases at different stages of assessment, from initial review to technical clarification to quality control. By adding transparency requirements, proper training, and regular policy reviews, UKRI could lead the way with approaches that both protect research integrity and embrace innovation.

If UKRI is nervous, they could start with a pilot program. Evaluate the impact of AI-assisted assessment. Compare it to a traditional approach. This would provide evidence-based insights for policy development while demonstrating leadership in research governance and funding.

The current policy feels half-baked. UKRI has shown they can craft sophisticated policy around A.I. use. The approach to applicants proves this. They need to extend that same thoughtfulness to the assessment process. The goal is not to use A.I. to replace human judgment but to enhance it. It would allow assessors to focus their expertise where it matters most.

This is about more than efficiency and keeping up with technology. It’s about creating the best possible system for identifying and supporting excellent research. If A.I. is a tool to support this process, we should celebrate. When we help assessors do their job more effectively, we help the entire research community.

The research landscape is changing rapidly. UKRI has taken an important first step in allowing A.I. to support the writing of funding grant applications. Now it’s time for the next one—using A.I. to support funding grant evaluation.