
Public Health is not a specialisation of medicine

Medicine saves lives one at a time. Public Health saves lives by the millions.

In many countries, the guilds of the medical fraternity provide for specialist membership. Attached to membership is prestige, promotion, and increased earning potential. In almost all cases, membership or fellowship of one of these guilds, typically titled “Colleges”, indicates increased expertise in the management of classes of disease in individual patients.

If you have diabetes, atrial fibrillation, Parkinson’s disease, major depression, etc., or you need more or less specialised surgery, you may well want to consult a member of one of these guilds of medicine.


Vaccination programs are critical to Public Health, but they do not require a medical specialisation in Public Health. [Image source: pixnio.com]

The focus of Public Health is the protection and improvement of the health of populations. The breadth of public health practice is enormous, with individuals working in disease-specific areas (e.g., HIV, TB, or mental health); settings (e.g., schools, workplaces, markets); social policy and the social determinants of health; health systems; health financing and market regulation; urban design; and health data analytics, to name just a few. Although there are commonalities between them, Public Health may be contrasted with Community Medicine and Social Medicine by the fact that Public Health practitioners do not spend their time treating individual patients, although they may guide services towards the better and more efficient treatment of populations of patients.

The most significant distinction is that Public Health draws its expertise from a wide range of disciplines: behavioural sciences, nursing, management, geography, history, politics, anthropology, environmental sciences, urban planning, sociology, pharmacy, economics, biostatistics, microbiology, ecology, mathematics, parasitology, computer science, entomology, engineering, veterinary science, … and medicine. Some of the best public health people I have ever worked with have come from history and geography. It is not that history and geography are peculiarly crucial to Public Health. It is that good Public Health requires interdisciplinary teams that can bring new perspectives to problems. It is relatively unusual to find historians and geographers in Public Health, so they bring novel solutions that are quite different from those one might otherwise see.

Postgraduate Public Health training, such as a Master of Public Health (MPH), is a useful way of giving the diverse disciplines involved in Public Health a common language with which to share problems, ideas, and solutions. There is no one best discipline for Public Health, and there is no reason that one has to study Public Health formally to make a valuable contribution to Public Health practice. I speak here as a person who has no formal qualification in Public Health but who has been a Professor of Public Health, has led Public Health teams, and has advised governments, UN agencies and international NGOs on Public Health.

I return to my titular point. Public Health benefits enormously from the input of people with a diverse range of qualifications. What then is the purpose of a medical specialisation in Public Health, if Public Health is not a branch of medicine?

The answer is historical and political. The historical answer is that Public Health is traditionally located within the Ministry of Health (MOH). There is a logic to this. So much of the practice of Public Health is about the coordination, regulation and efficient delivery of health services that it must be coordinated with MOH activities. The obvious downside of this historical location of Public Health is that, as it has become increasingly evident that population health problems require whole-of-government approaches, any attempt to transcend the departmental pillars of government is regarded by other Ministries as an MOH power grab.

Politically, power within the MOH is typically vested in people with membership in one of the specialist guilds of medicine. The only way for Public Health to have status in the MOH (and let’s face it, Public Health has never been as sexy as clinical medicine) is for it to be led by people with a medical qualification and membership of a specialist guild. Thus, specialist guilds of Public Health medicine were born.

This historical and political strategy protected the status of Public Health within MOH. It provided a career pathway for medically qualified personnel interested in pursuing a career in Public Health. Unfortunately, it also limited the capacity of Public Health practice to deliver the best population health outcomes.

Governments need to improve the way they approach the protection, promotion and improvement of the health of their populations. A good start is to recognise that medicine is a part of the practice of Public Health (just as history, geography, etc. are), but Public Health is much bigger than a specialisation of medicine.

Globalisation and health

The past has already been written and the accolades distributed. We now need to decide whether the next century is going to be good or bad for our health, and what role globalisation will play in determining our destiny. People living in failed states do not enjoy utopian, anarchic freedom. They die young. Healthy populations need the goods and services of society to be shared in a broadly inclusive fashion. They need health systems that can respond rapidly and flexibly to emerging disease. They need environments that support human life.


The zombie apocalypse is our least likely but most entertaining future. [Image from proprofs.com]

Seventy thousand years ago, our ancestors took their first steps out of Africa. With those steps they initiated the binding link between globalisation and health. The difference between then and now is a matter of temporal and geographical scale. Then, nothing moved faster than walking pace. Now, a person can traverse the globe in 24 hours. A city thousands of kilometres away can be destroyed in 30 minutes. An idea can be everywhere in seconds.

The technological advances of the last century have been matched by extraordinary improvements in human health. Average life expectancy barely moved until the beginning of the last century, and over the next hundred years, it doubled. In 2016, the global average life expectancy was 71.4 years. We had achieved the biblical entitlement of three score and ten years promised in Psalm 90. The improvements in health were achieved because of globalisation. Reductions in poverty. Improvements in food supply. Advances in healthcare. Sophisticated infrastructure delivering clean water and carrying away waste. Those advances have also been accompanied by large inequalities in health outcomes and significant environmental degradation.

I suggest there are three broad intersections between globalisation and health. First, there are the real (and sometimes imagined) disease outbreaks: Ebola or the zombie apocalypse. Infectious disease, however, is only one part of the relationship between health and globalisation. The second, very modern concern is the interconnection between our global activities and environmental change, and by extension their impact on human health. The third is our relationships with each other, how those relationships can shift, and the effect such shifts may have on the availability of health-supporting resources.

I sketched these ideas out in a 3,000-word essay in early 2017 at the invitation of the editors of “Vanguardia Dossier”, a Spanish-language Catalan magazine. Many people (including myself) cannot read the published Spanish version, but you can get the slightly rough English-language preprint here.

Reidpath DD. Globalización y salud [Globalisation and health]. Vanguardia Dossier. 2017;65:76–81.

Join the Q: Chasing journal indicators of academic performance

Universities are predisposed to rank each other (and be ranked) by performance, including research performance. Rankings are not merely about quality; they are about perception. And perception translates nominal prestige into cash through student fees, government block grants, and research income. As a consequence, there is a danger that universities may chase indicators of prestige rather than think about the underlying data that inform the indicator, and what those data might mean for understanding and improving performance.

This image comes from an article published in The Conversation under a Creative Commons licence; see https://bit.ly/2ExLDNB

Ranking has become so crucial in the life of universities that it infuses the brickwork and is absorbed by us each time we brush along the walls. At one time, when evaluating research performance, an essential metric was the number of publications. That calculus has shifted, and it is no longer enough simply to publish. Now we have to publish in Q1 journals; i.e., journals ranked by impact factor in the top 25% of their comparison pool. Those in the next quarter down, the 26th to 50th percentile, are Q2, and so forth. Unfortunately, Q-ranking encourages indicator chasing. It has a level of arbitrariness that discourages thoughtful choices about where to publish, and it leads to such unhelpful advice as, “publish in more Q1 journals”.
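The quartile arithmetic itself is trivial, which is part of the problem. Here is a minimal sketch in Python of how a Q-rank falls out of nothing more than a journal's position in whichever comparison pool is chosen; the journal names and impact factors are made up for illustration, not real JCR data.

```python
# A minimal sketch of how quartile (Q) rankings fall out of a comparison pool.
# The journal names and impact factors below are made up, not real JCR data.

def q_rank(journal, pool):
    """Return "Q1".."Q4" for `journal`, given a dict of {journal: impact_factor}."""
    ranked = sorted(pool, key=pool.get, reverse=True)        # highest impact factor first
    percentile = (ranked.index(journal) + 1) / len(ranked)   # share of the pool at or above
    if percentile <= 0.25:
        return "Q1"
    if percentile <= 0.50:
        return "Q2"
    if percentile <= 0.75:
        return "Q3"
    return "Q4"

pool = {"Journal A": 6.1, "Journal B": 4.8, "Journal C": 2.9, "Journal D": 1.2}
print(q_rank("Journal C", pool))   # -> Q3
```

Nothing in the calculation says anything about the quality of the work in the journal; everything hangs on which journals are placed in the pool.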

The Q-ranking game was brought home to me in a recent discussion among colleagues in medical education research about where they should publish. In this discussion BMC Medical Education was identified as a poorly ranked journal (Q3) that should not be considered.

I like and entirely approve of publishing research in the very best journals that one can, and encouraging staff to publish in high-quality journals is a good thing. “Best” and “high quality”, however, are not just about the impact factor and the Q-ranking of a journal. The best journal for an article is the journal that can create the greatest impact from the work, in the right area, be that in research, policy, or practice. A highly cited article in a low-impact-factor journal may be better than a poorly cited paper in a high-impact-factor journal.

Some years ago I was invited by a government research council to review the performance of a university’s Health Policy Unit. One of my fellow panel members was very focused on the poor ranking of most of the journals into which this unit was publishing. The director of the unit tried to defend the record. She argued that it was more important that the publications were policy-relevant than that they were published in a prestigious journal. The argument was cut down by the research council representative. From the representative’s point of view, the government had to allocate funds, and the journal ranking was an important mechanism for evaluating the return on investment.

I did a quick back-of-the-envelope calculation. It was true, the unit had published in some pretty ordinary journals, not an article in The Lancet among them. However, if one treated the collection of papers published by the unit as if the unit were a stand-alone journal, the impact factor exceeded that of PLoS Medicine, a highly regarded Q1 journal. My argument softened the opposition to re-funding the unit, but it did not completely dispel it, because the research council didn’t care about the individual papers. They wanted prestige, and Q-ranking marked prestige.
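For anyone who wants to repeat the exercise, the two-year impact factor is just a ratio of citations to citable items, so the same arithmetic can be applied to any collection of papers. A minimal sketch, using illustrative numbers rather than the unit's actual counts:

```python
# A back-of-the-envelope "impact factor" for any collection of papers,
# treating the collection as if it were a stand-alone journal.
# The counts below are illustrative, not the Health Policy Unit's data.

def impact_factor(citations_this_year, citable_items_prev_two_years):
    """Two-year impact factor: citations received this year to items
    published in the previous two years, divided by the number of
    citable items published in those two years."""
    return citations_this_year / citable_items_prev_two_years

# e.g. 40 papers published over two years, attracting 450 citations this year
print(impact_factor(450, 40))   # -> 11.25
```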

So, which medical education journal should you publish in? The advice was blunt. The university uses a Thomson Reuters product, Journal Citation Reports (JCR), to determine the Q-ranking of journals. BMC Medical Education ranks quite poorly (Q3), so don’t publish there. The ranking in this case, however, was based on journals bundled into a comparison pool that JCR calls “Education, Scientific Disciplines”. This comparison pool includes such probably excellent (and completely irrelevant) journals as Physical Review Special Topics-Physics Education Research and Studies in Science Education. However, if one adopts the “Social Sciences, General” pool of comparison journals, which JCR also reports, BMC Medical Education jumps from a Q3 to a Q1 journal. And this raises the obvious question: what is the true ranking of BMC Medical Education?

The advice about where to publish explicitly dismissed an alternative source for the Q-ranking of journals, the SCImago Journal Rank (SJR), because it was too generous, with the implication that “generous” meant “not as rigorous”. In fact, it appears that the difference between SJR and JCR is largely about the pool of journals used for the comparison. Both SJR and JCR treat the pool of journals against which a chosen journal is compared as relatively static. But it is not. The pool against which a journal should be compared (assuming one should do this at all) depends on the kind of research being reported and the audience. Consider potential journals for publishing a biomedical imaging paper. The Q-ranking pool could be (1) general medical journals, (2) journals dealing with medical imaging, (3) radiology journals, (4) radiography journals, or (5) some more refined subset of journals. As with BMC Medical Education, the Q-rank of prospective journals could be quite different in each pool.
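To make the point concrete, here is a small self-contained illustration. The impact factors are hypothetical, not JCR figures; the only thing that changes between the two runs is the comparison pool, yet the quartile flips from Q3 to Q1, just as BMC Medical Education does.

```python
# Hypothetical impact factors: the same journal, two different comparison pools.
import math

def quartile(impact_factor, pool):
    """Quartile (Q1-Q4) of `impact_factor` ranked within `pool`."""
    ranked = sorted(pool + [impact_factor], reverse=True)
    percentile = (ranked.index(impact_factor) + 1) / len(ranked)
    return f"Q{math.ceil(percentile * 4)}"

target = 1.6   # impact factor of the journal we care about

# A pool dominated by higher-impact-factor journals (think "Education, Scientific Disciplines")
pool_one = [4.5, 3.8, 3.1, 2.7, 2.2, 1.3, 1.0]
# A pool of mostly lower-impact-factor journals (think "Social Sciences, General")
pool_two = [1.9, 1.1, 0.9, 0.7, 0.5, 0.4, 0.3]

print(quartile(target, pool_one))   # -> Q3
print(quartile(target, pool_two))   # -> Q1
```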

One might reasonably, and rhetorically, ask: did the value of the science or the quality of the work change because the comparison pool changed? This leads to a small thought experiment. Imagine a world in which every journal below Q1 suddenly disappeared. The quality of the remaining journals has not changed, but three-quarters of them are suddenly Q2 and below. (As an aside, this is reminiscent of the observation that half of all doctors are below average.)

If a researcher works in a single discipline, learning which journals are the preferred outlets becomes second nature. If one works across disciplines, the choice is not as clear. The question is no longer, “which are the highest ranked journals?”, but “which are the highest ranked journals given this article and these possible disciplinary choices?”. If the question is, “which journal will deliver the most significant impact of the type that I seek?”, then Q-ranking is only relevant if the outcome sought is publication in a Q1 journal from a particular comparison pool. If one seeks some other kind of impact, like policy relevance or a change in practice, then the Q-rank may be of no value.

Indicators of publishing quality should not drive strategy. Strategy should be inspired by a vision of excellence and institutional purpose. If you want an example of how chasing indicators can have a severe and negative impact, have a look at this (Q1!!!!) paper.


Donald Trump’s BMI: getting the measure of the man.

I find myself fascinated by a pointless lie because it is inescapably tragic. All it can do is diminish the person in the eyes of others. And this brings us to Donald Trump’s height. In January 2018, the Physician to the President, Ronny L. Jackson, MD, asserted that Donald Trump was 6’3″ (1.90 m) tall. This is so unlikely to be true that it stretches credulity. There is no reason for Jackson to lie spontaneously about a patient’s height, and it seems probable that he was encouraged to add a few inches by the President himself.

When asked to self-report height, both men and women in the US tend to overstate it. Burke and Carman have suggested that overstating height is motivated by social desirability: you can never be too tall. There is ample evidence of Donald Trump’s (misplaced) search for the socially desirable with respect to his hair, his tan, his ethnicity, his intelligence, and now his height.

In 2018 we learnt that Donald Trump was officially not quite obese (body mass index (BMI) < 30), and in 2019 he had nudged over the line into the obese range (BMI ≥ 30). Overstating height creates a problem in the calculation of BMI, which is mass in kilograms divided by the square of height in metres. Given that Donald Trump is likely shorter than 1.90 m (6’3″), and probably closer to 1.854 m (6’1″), this has implications for whether he was really obese in 2018 (not just overweight, as stated by his Physician) and for just how obese he probably is (Figure 1).

Figure 1: Donald Trump’s BMI in 2018 and 2019 given different assumptions about his height [R-code here].

In 2018 Donald Trump was just below the obese category if and only if he was really 6’3″ (1.90 m) tall. At any height less than that he was obese in 2018, and he is obese today. His most likely true height, given comparisons with others (cf. Barack Obama), is 6’1″, and this puts him comfortably in the obese range.
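The arithmetic behind these claims is easy to reproduce. Below is a minimal sketch of the calculation underlying Figure 1; the weights (239 lb in 2018 and 243 lb in 2019) are the figures reported at the official physicals and are not given above, so treat them as assumptions.

```python
# A sketch of the BMI arithmetic behind Figure 1. Heights are the claimed
# 6'3" and the more plausible 6'1"; the weights (239 lb in 2018, 243 lb in
# 2019) are the officially reported figures and are assumptions here.

LB_TO_KG = 0.453592   # pounds to kilograms
IN_TO_M = 0.0254      # inches to metres

def bmi(weight_kg, height_m):
    """Body mass index: mass in kilograms divided by the square of height in metres."""
    return weight_kg / height_m ** 2

heights = {"6'3\" (claimed)": 75 * IN_TO_M, "6'1\" (more likely)": 73 * IN_TO_M}
weights = {2018: 239 * LB_TO_KG, 2019: 243 * LB_TO_KG}

for year, w in weights.items():
    for label, h in heights.items():
        print(f"{year}, {label}: BMI = {bmi(w, h):.1f}")
# 2018, 6'3" (claimed):     BMI = 29.9  (just under the obesity threshold of 30)
# 2018, 6'1" (more likely): BMI = 31.5
# 2019, 6'3" (claimed):     BMI = 30.4
# 2019, 6'1" (more likely): BMI = 32.1
```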

Misrepresenting one’s height does not create a problem if the lie is reserved for others — except perhaps in a political sense. Problems arise if one deludes oneself. Telling others that you are taller and healthier than you really are is one thing; if you lie to yourself you cannot properly manage your health.