Category Archives: Sociology

In the words of Wikipedia: Sociology is the study of social behaviour or society, including its origins, development, organization, networks, and institutions. In the context of this blog, it usually relates to the sociology of health.

Join the Q: Chasing journal indicators of academic performance

Universities are predisposed to rank each other (and be ranked) by performance, including research performance. Rankings are not merely about quality; they are about perception. And perception translates nominal prestige into cash through student fees, block grants from government, and research income. As a consequence, there is a danger that universities may chase indicators of prestige rather than think about the underlying data that inform the indicator, and what those data might mean for understanding and improving performance.

Image: from an article published in The Conversation under a Creative Commons licence; see https://bit.ly/2ExLDNB

Ranking has become so crucial in the life of universities that it infuses the brickwork and is absorbed by us each time we brush along the walls. At one time, when evaluating research performance, an essential metric was the number of publications. That calculus has shifted and it is no longer enough to publish. Now we have to publish in Q1 journals; i.e., journals ranked by impact factor in the top 25% of their comparison pool. Journals in the 26th to 50th percentile are Q2, and so forth. Unfortunately, Q-ranking encourages indicator chasing. It has a level of arbitrariness that discourages thoughtful choices about where to publish, and leads to such unhelpful advice as, “publish in more Q1 journals”.

The Q-ranking game was brought home to me in a recent discussion among colleagues in medical education research about where they should publish. In that discussion, BMC Medical Education was identified as a poorly ranked journal (Q3) that should not be considered.

I like, and entirely approve of, publishing research in the very best journals that one can, and encouraging staff to publish in high-quality journals is a good thing. “Best” and “high quality”, however, are not just about the impact factor and the Q-ranking of a journal. The best journal for an article is the journal that can create the greatest impact from the work, in the right area, be that in research, policy, or practice. A highly cited article in a low impact factor journal may be better than a poorly cited paper in a high impact factor journal.

Some years ago I was invited by a government research council to review the performance of a university’s Health Policy Unit. One of my fellow panel members was very focused on the poor ranking of most of the journals into which this unit was publishing. The director of the unit tried to defend the record. She argued that it was more important that the publications were policy-relevant than that they were published in a prestigious journal. The argument was cut down by the research council representative. From the representative’s point of view, the government had to allocate funds, and the journal ranking was an important mechanism for evaluating the return on investment.

I did a quick back-of-the-envelope calculation. It was true: the unit had published in some pretty ordinary journals — not an article in The Lancet among them. However, if one treated the collection of papers published by the unit as if the unit were a stand-alone journal, its impact factor exceeded that of PLoS Medicine, a highly regarded Q1 journal. My argument softened the opposition to re-funding the unit, but it did not completely deal with it, because the research council didn’t care about the individual papers. They wanted prestige, and Q-ranking marked prestige.
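If it helps to see the arithmetic, the back-of-the-envelope calculation is simply the standard impact factor formula applied to the unit's recent output rather than to a journal: citations received in a year to items published in the previous two years, divided by the number of those items. Here is a minimal sketch with entirely hypothetical citation counts; none of these numbers come from the actual review.

```python
# Treat the unit's recent papers as if they were the contents of a single
# journal and compute a journal-style "impact factor" for them.
# The citation counts below are invented purely to illustrate the arithmetic.

# Citations received this year by each paper the unit published
# in the previous two years (hypothetical values).
citations_to_recent_papers = [12, 3, 0, 25, 7, 4, 18, 2, 9, 6]

papers_published = len(citations_to_recent_papers)
unit_impact_factor = sum(citations_to_recent_papers) / papers_published

print(f"Unit 'impact factor': {unit_impact_factor:.1f}")  # 8.6 for these made-up numbers
```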

So, which medical education journal should you publish in? The advice was blunt. The university uses a Thomson Reuters product, Journal Citation Reports (JCR), to determine the Q-ranking of journals. BMC Medical Education ranks quite poorly — Q3 — so don’t publish there. The ranking in this case, however, was based on journals bundled into a comparison pool that JCR calls “Education, Scientific Disciplines”. This comparison pool includes such probably excellent (and completely irrelevant) journals as Physical Review Special Topics-Physics Education Research and Studies in Science Education. However, if one adopts the “Social Sciences, General” pool of comparison journals, which JCR also reports, BMC Medical Education jumps from a Q3 to a Q1 journal. And this raises the obvious question: what is the true ranking of BMC Medical Education?

The advice about where to publish explicitly dismissed an alternative source for the Q-ranking of journals, the Scimago Journal Rank (SJR), because it was too generous — with the implication that “generous” meant “not as rigorous”. In fact, it appears that the difference between SJR and JCR is largely about the pool of journals used for the comparison. Both SJR and JCR treat the pool of journals against which the chosen journal should be compared as relatively static. But it is not. The pool against which a journal should be compared (assuming one should do this at all) depends on the kind of research being reported and the audience. Consider potential journals for publishing a biomedical imaging paper. The Q-ranking pool could be (1) general medical journals, (2) journals dealing with medical imaging, (3) radiology journals, (4) radiography journals, or (5) some more refined subset of journals. As with BMC Medical Education, the Q-rank of prospective journals could be quite different in each pool.
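To make the pool-dependence concrete, here is a small sketch of the quartile calculation. The journal's impact factor and both comparison pools below are invented for illustration; the point is only that an identical journal, with an identical impact factor, can be Q3 in one pool and Q1 in another.

```python
# A minimal sketch of how quartile (Q) ranking works. All impact factors
# below are made up for illustration.

def q_rank(impact_factor, pool):
    """Return the quartile (1-4) of a journal within a comparison pool.

    A journal is Q1 if its impact factor sits in the top 25% of the pool,
    Q2 if it sits in the 26th-50th percentile band, and so on.
    """
    ranked = sorted(pool, reverse=True)           # highest impact factor first
    position = ranked.index(impact_factor) + 1    # 1-based rank in the pool
    percentile = position / len(ranked)           # fraction of the pool at or above this journal
    if percentile <= 0.25:
        return 1
    elif percentile <= 0.50:
        return 2
    elif percentile <= 0.75:
        return 3
    return 4

journal_if = 1.3  # hypothetical impact factor for the journal of interest

# Hypothetical pool dominated by higher-impact science education journals.
education_scientific = [3.2, 2.9, 2.4, 2.1, 1.9, 1.7, 1.5, 1.4, journal_if, 1.1, 0.9, 0.7]

# Hypothetical pool of general social science journals with lower impact factors.
social_sciences_general = [1.8, 1.5, journal_if, 1.1, 1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3]

print(q_rank(journal_if, education_scientific))     # -> 3 (Q3 in this pool)
print(q_rank(journal_if, social_sciences_general))  # -> 1 (Q1 in this pool)
```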

One might reasonably, and rhetorically, ask: did the value of the science or the quality of the work change because the comparison pool changed? This leads to a small thought experiment. Imagine a world in which every journal below Q1 suddenly disappeared. The quality of the remaining journals has not changed, but three-quarters of them are suddenly Q2 and below. (As an aside, this is reminiscent of the observation that half of all doctors are below average.)

If a researcher works in a single discipline, learning which are the preferred journals to publish in becomes second nature. If one works across disciplines, the choice is not as clear. The question is no longer, “which are the highest ranked journals?”, but “which are the highest ranked journals given this article, and these possible disciplinary choices?”. If the question is, “which journal will have the most significant impact of the type that I seek?”, then Q-ranking is only relevant if the outcome sought is to publish in a Q1 journal from a particular comparison pool. If one seeks some other kind of impact, like policy relevance or change in practice, then the Q-rank may be of no value.

Indicators of publishing quality should not drive strategy. Strategy should be inspired by a vision of excellence and institutional purpose. If you want an example of how chasing indicators can have a severe and negative impact, have a look at this (Q1!!!!) paper.

 

Prevalence of sexual assault at Australian Universities is … non-zero.

A few days ago the Australian Human Rights Commission (AHRC) launched Change the course, a national report on sexual assault and sexual harassment at Australian universities led by Commissioner Kate Jenkins. Sexual assault and sexual harassment are important social and criminal issues, yet the AHRC report is misleading and unworthy of the gravity of the subject matter.

It is a statistical case study in "how not to."

The report was released to much fanfare, receiving national media coverage including TV and newspapers, and a quick response from universities. “At a glance …” the report highlights among other things:

  • 30,000+ students responded to the survey — remember this number, because (too) much is made of it.
  • 21% of students were sexually harassed in a university setting.
  • 1.6% of students were sexually assaulted in a university setting.
  • 94% of sexually harassed and 87% of sexually assaulted students did not report the incidents.

From a reading of the survey’s methodology, any estimates of sexual harassment/assault should be taken with a shovelful of salt, and should generate no response other than that of the University of Queensland’s Vice-Chancellor, Peter Høj: that any number greater than zero is unacceptable. What we did not have before the publication of the report was a reasonable estimate of the magnitude of the problem and, notwithstanding the media hype, we still don’t. The AHRC’s research methodology was weak, and it looks like they knew the methodology was weak when they embarked on the venture.

Where does the weakness lie?  The response rate!!!

A sample of 319,252 students was invited to participate in the survey. It was estimated at the design stage that between 10% and 15% of students would respond (i.e., 85–90% would not respond) (p. 225 of the report). STOP NOW … READ NO FURTHER. Why would anyone try to estimate prevalence using a strategy like this? Go back to the drawing board. Find a way of obtaining a smaller, representative sample of people who will respond to the questionnaire.

Giant samples with poor response rates are useless. They are a great way for market research companies to make money, but they do not advance knowledge in any meaningful way, and they are no basis for formulating policy. The classic example of a large sample with a poor response rate misleading researchers is the Literary Digest poll to predict the outcome of the 1936 US presidential election. The Digest sent out 10 million surveys and received 2.3 million responses. By any measure, 2.3 million responses to a survey is an impressive number. Unfortunately for the Literary Digest, there were systematic differences between responders and non-responders. The Literary Digest predicted that Alf Landon (who?) would win the presidency with 69.7% of the electoral college votes. He won 1.5% of the electoral college votes. This is partly a lesson about the US electoral college system, but it is also a significant lesson about non-response bias. The Literary Digest had a 77% non-response rate; the AHRC had a 90.3% non-response rate. Who knows how the 90.3% who did not respond compare with the 9.7% who did? Maybe people who were assaulted were less likely to respond, and the number is a gross underestimate of assaults. Maybe they were more likely to respond, and it is a gross overestimate. The point is that we are neither wiser nor better informed for reading the AHRC report.

Sadly, whoever estimated the (terrible) response rate was, even then, overly optimistic. The response rate was significantly lower than the worst-case scenario of 10% [Response Rate = 9.7%, 95% CI: 9.6%–9.8%].
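For anyone who wants to check the figures, the response rate and its 95% confidence interval can be recovered from the numbers quoted above. In the sketch below the respondent count is back-calculated from the reported 9.7% rate (the report itself says only "30,000+"), and a simple normal approximation is used for the interval, so treat it as illustrative rather than a re-analysis.

```python
# Back-of-the-envelope check of the AHRC response rate and its 95% confidence
# interval, using a normal approximation for a proportion. The respondent
# count is back-calculated from the reported 9.7% rate, so it is approximate.

import math

invited = 319_252
respondents = round(0.097 * invited)   # ~30,967; the report says "30,000+"

p = respondents / invited              # observed response proportion
se = math.sqrt(p * (1 - p) / invited)  # standard error of the proportion
z = 1.96                               # multiplier for a 95% interval

lower, upper = p - z * se, p + z * se
print(f"Response rate: {p:.1%} (95% CI {lower:.1%} to {upper:.1%})")
# -> roughly 9.7% (95% CI 9.6% to 9.8%), matching the figures quoted above.

# Literary Digest, 1936: a huge sample with huge non-response.
ld_sent, ld_returned = 10_000_000, 2_300_000
print(f"Literary Digest non-response: {1 - ld_returned / ld_sent:.0%}")  # -> 77%
```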

In sharp contrast to the bad response rate of the AHRC study, the Crime Victimisation Survey (CVS) 2015–16, conducted by the Australian Bureau of Statistics (ABS), had a nationally representative sample and a 75% response rate — fully completed! That’s a survey you could actually use for policy. The CVS is a potentially less confronting instrument, which may account for the better response rate. It seems more likely, however, that recruiting students by sending them emails is simply not a sophisticated or adequate strategy.

Poorly conducted crime research is not merely a waste of money; it trivialises the issue. The media splash generates an illusion of urgency and seriousness, and the poor methodology means it can be quickly dismissed.

If there is a silver lining to this cloud, it is that the AHRC has created an excellent learning opportunity for students involved in quantitative (social) research.

Addendum

It was pointed out to me by Mark Diamond that a better ABS resource is the 2012 Personal Safety Survey, which tried to answer the question of the national prevalence of sexual assault. A Crime Victimisation Survey is likely to receive a better response rate than a survey looking explicitly at sexual assault. I reproduce the section on sample size from the explanatory notes because it highlights the difference between a well-conducted survey and the pile of detritus reported by the AHRC.

There were 41,350 private dwellings approached for the survey, comprising 31,650 females and 9,700 males. The design catered for a higher than normal sample loss rate for instances where the household did not contain a resident of the assigned gender. Where the household did not contain an in scope resident of the assigned gender, no interview was required from that dwelling. For further information about how this procedure was implemented refer to Data Collection.

After removing households where residents were out of scope of the survey, where the household did not contain a resident of the assigned gender, and where dwellings proved to be vacant, under construction or derelict, a final sample of around 30,200 eligible dwellings were identified.

Given the voluntary nature of the survey a final response rate of 57% was achieved for the survey with 17,050 persons completing the survey questionnaire nationally. The response comprised 13,307 fully responding females and 3,743 fully responding males, achieving gendered response rates of 57% for females and 56% for males.


Who will guard the journals? Gender bias in the “Big Five” medical journals.

Journals, by which I mean Editors, have shaped modern science, particularly in medicine. The publication policies of journals now direct the kinds of ideas that are acceptable, how those ideas should be presented, and the ethical frameworks that should govern data collection, authorship, treatment of participants, and data sharing. Journals will refuse to publish a paper if they are not satisfied that the authors have fulfilled those requirements. The journals have become both arbiters and gatekeepers of sound scientific practice. A recent journal issue on conflicts of interest in the Journal of the American Medical Association (JAMA) is a case in point (2 May 2017).

Editors will also self-publish encyclicals of good conduct, laying down the rules of engagement for the future. The JAMA editorial supporting the recent special issue is such an example. When the journals involved are at the top of their fields, these views reverberate. In medicine, the Big Five journals in general and internal medicine are the New England Journal of Medicine (NEJM), Lancet, JAMA, British Medical Journal, and Annals of Internal Medicine. When the Editors speak, the field listens. Their role is almost revelatory: an imperfect conduit for nature’s voice, whispered to researchers in their labs and clinics.

The rules do not, unfortunately, prevent the publication of bad science. The Autism-MMR paper in the Lancet is an excellent example of bad science slipping into the field. In general, however, failures of science lie at the feet of the scientists. The journals rise above it. A retraction here, a commentary there, and the stocks or pillory of peer humiliation are kept for the authors.

It is easy when criticism can be deflected and laid at the feet of authors. What, however, should the response be when researchers identify a bias in the Big Five journals themselves? Bias in medicine is a serious issue. It indicates a skew in the published science – a tendency to emphasise one kind of science over another, or to promote one interest over another. It carries the future risk of skewing practice and funding.

In 2016, Giovanni Filardo and colleagues identified a gender bias in the first authors of research articles published in the Big Five. The journals were more likely to publish articles with a man as first author than a woman. The most biased journal was NEJM. You will not have read about the research in that journal, however, because it rejected the paper when it was submitted. Unfortunately, the bias in the gender of published first authors is not a local, journal-level issue. The bias has a larger and more insidious career effect. Women are less likely to hold the prestigious position of first author in the Big Five journals, and, ceteris paribus, they are disadvantaged in funding applications, job applications, receipt of awards, and recognition.

My co-authors and I recently published an investigation of gender bias in clinical case reports. You may be unsurprised to learn that clinical case reports are more likely to be about men. Apparently, a clinical case about a man is just more interesting than a clinical case about a woman. All but one of the investigated journals showed a gender bias, and the most biased journal was NEJM.

Of course, journals can and should reject research papers that are irrelevant or deficient in quality, and our paper may have been both. The fact that a journal like the NEJM should have rejected two recent papers that identified it as the most gender-biased of the Big Five, however, begins to look like an avoidance of criticism.

If there is a tendency to avoid self-reflection, particularly in an area as important as bias in science, then the editorial decisions begin to have much greater significance, and at least a whiff of hypocrisy. The origins of a bias may be authorial: a greater proportion of articles written by men than by women are submitted to the journal; a greater proportion of clinical case reports about men rather than women are submitted to the journal. The Editors are in a position to correct the submission bias, just as they vigorously correct other biases. The Big Five would have acceptance rates below 10%; they presumably have a bias towards higher rather than lower quality science. We are suggesting that, in exercising their editorial judgment, they could include factors they have (presumably) hitherto not noticed in their own behaviour. They might find it easier to explain these editorial shifts if they based them on scientific research published in their own journals. At the very least, it would indicate that the issue is being taken seriously.


This article was co-written by Daniel D Reidpath and Pascale Allotey

Would you give knee surgery to the FAT MAN?

I do understand your plight, Mr Smith.  An arthritic knee can be extremely painful.  And you say it’s so bad you can’t even walk from the living room to the kitchen.  That’s actually very good news!  Yes, yes … awful … but terribly good news. If you can’t walk to the kitchen, you can’t eat. If you can’t eat you’ll lose weight.  And the faster you lose weight, the sooner we’ll schedule your knee surgery.

On 15 March 2017, Dr David Black, NHS England’s medical director for Yorkshire and the Humber, sent a letter of praise to the Rotherham Clinical Commissioning Group (RCCG). The RCCG had decided to restrict access to hip and knee surgery for smokers and “dangerously overweight patients”. The letter was leaked, and it has triggered, according to the Guardian, “a storm of protest.”

The title of this blog is a play on David Edmonds’ book, Would you kill the fat man, an exploration of moral philosophy and difficult choices about the valuation of human life. The RCCG’s decision intrigued me. It was essentially a decision about rationing a finite commodity — healthcare. In a world of plenty, rationing healthcare is a non-question. In the real world, however, in a world of shrinking healthcare budgets and a squeezed NHS, resources must be allocated in a way that means some people will receive less healthcare or no healthcare at all. Fairness requires that the rules of allocation are transparent and reasonable.

While you ponder whether you would give knee surgery to the FAT MAN, I have a follow-up question. Would you want to see a doctor who would deny you knee surgery because of some characteristic of yours unrelated to whether you would benefit from the surgery?

I am sorry Mrs Smith, today we decided not to offer clinical services to women, people under 5’7″, or carpenters. We need to cut the costs of our clinical services, and by excluding those groups, we can save an absolute bundle.

I have heard it said of the doctor, academic and human rights advocate, Paul Farmer, that he would regularly re-allocate hospital resources from Boston to his very needy patients in Haiti.  He used to raid the drug stocks of a Boston hospital, stuff them in his suitcase and fly them back to his patients in Haiti.  I have no idea if the story is true or not. It does mark, however, one of the great traditions of medicine.  The role of a doctor is to advocate vigorously for the health (and often social) needs of the patient.  The patient actually in front of them.  The one in need.  Because, if your doctor will not advocate for your health needs, who will?  This is why all the great TV hospital dramas show a clash between the doctor and the hospital administrator.  Administrators ration.  Doctors treat.  The doctor goes all out to save little Jenny, against all odds.  The surly hospital administrator stands in front of the operating room, hand outstretched and declares (Pythonesque): “None shall pass.”

Under the current NHS system of clinical commissioning groups, there are family doctors who are simultaneously trying to make rational decisions about the allocation of limited resources to a population and trying to be the best health advocates for the patient in front of them. That screams conflict of interest. If you live in the catchment area of the RCCG and want my advice, check which doctors are part of the RCCG. If your doctor is one of them, change doctors immediately. Treating you and advocating for your health interests is what you need and should want. Unfortunately, if she is part of the RCCG while she is treating you, you are not her principal concern. Run(!), assuming, of course, that you don’t need knee surgery.

Should smokers and overweight people receive knee surgery?  Let’s start with smokers.  Why would you not want to treat a smoker?  It is difficult to come up with arguments that are not so outrageous that they are embarrassing to make. But I won’t let personal embarrassment get in the way of stating the top two silly arguments that came to mind:

  1. Smoking is a disgusting habit, and anyone who smokes deserves all the pain they get.
  2. Smokers won’t live as long as non-smokers, so an investment in surgery to reduce pain and improve mobility in smokers will not have the same net benefit to society as the same investment in non-smokers.

The arguments for restricting the surgery to people who are not overweight are similarly cringe-worthy. There are also clinical reasons for prioritising the overweight. The load on joints resulting from increased weight creates greater wear-and-tear, and the broader inflammatory processes that obesity triggers also seem to increase the risk of osteoarthritis — affecting hands as well as knees. [See, for example, here and here.]

I can’t find the RCCG’s arguments for restricting access to knee surgery for smokers and people who are overweight, but prima facie it looks a lot like a variant of victim blaming.

Full disclosure.  I am all for the rational allocation of resources.  I think smoking is a disgusting habit. I am overweight and trying to do something about it.  I also think that the arguments for resource allocation need to be more explicit about the social values upon which they are often implicitly based.