

The Skeptics Guide to Emergency Medicine
Dr. Ken Milne
Meet ’em, greet ’em, treat ’em and street ’em

Jan 7, 2021 • 17min
SGEM Xtra: Happy New Year 2021
Date: January 7th, 2021
Happy New Year to all the SGEMers. I know 2020 has been a bit of a dumpster fire. We have all faced challenges during the COVID19 global pandemic. I tried not to contribute to the large volume of information coming out on SARS-CoV-2.
There were only four episodes that directly addressed COVID19:
SGEM Xtra: Mask4All Debate
SGEM#229: Learning to Test for COVID19
SGEM Xtra: CAEP National Grand Rounds - COVID19 Treatments
SGEM#309: That’s All Joe Asks of You – Wear a Mask
There have been many other FOAMed resources (REBEL EM, First10EM, EM Cases, St. Emlyn's, and others) that have done a great job covering the pandemic.
This is an SGEM Xtra episode to announce a few exciting new things for 2021.
SGEM Continuing Medical Education Credits
The BIG news is that the SGEM will now be offering Continuing Medical Education (CME) credits for all SGEM episodes. Click on this LINK to find out more.
The Skeptics' Guide to Emergency Medicine (SGEM) is part of the Free Open Access to Meducation movement (FOAMed). The SGEM tries to cut the knowledge translation window down from over ten years to less than one year with the power of social media. The ultimate goal is for patients to get the best care, based on the best evidence.
The FOAMed philosophy is that the information should be available to anyone, anytime, anywhere at no cost. This is similar to the philosophy of emergency medicine. It is the light in the house of medicine that is always on for anyone, at anytime, for anything. The SGEM has been free since it started in 2012 and will always be free open access.
Many of you have asked about getting CME credits for listening to the SGEM podcast and reading the SGEM blog. We know physicians (MD and DO), Nurse Practitioners (NP) and Physician Assistants (PA) have to collect so many CME credit hours for their respective professional organizations. This can be more challenging with the cancellation of in-person conferences and meetings.
The SGEM Hot Off the Press (SGEMHOP) episodes which are published once a month do offer CME credits. However, you can only claim these credits if you are a member of the Society of Academic Emergency Medicine (SAEM). This new initiative will allow anyone to claim CME credits for all of the SGEM episodes.
Getting CME credits for the weekly SGEM episodes is something I have been wanting to do for years. The barrier to getting CME credits for the SGEM before now was that accreditation takes a lot of time and costs a lot of money. The cancellation of in-person conferences due to COVID19 has been the push I needed to finally get this service added to the SGEM.
This project has been made possible through a partnership with a Legend of Emergency Medicine, Dr. Richard Bukata, and his Center for Medical Education (CCME) company. CCME has been providing medical education in the form of audio programs and conferences since 1977. They have the infrastructure to provide this type of service. They also have an arrangement to get the CME credits at a very reasonable price.
Sign up by January 31st, 2021:
There can be only one...
The SGEM CME program offers up to 26 credits (1 credit hour per SGEM episode) over 6 months for only $195. If you sign up before January 31st, 2021 we will also give you 26 credits for free. This will be the previous six months of SGEM content that has already been approved for CME credits. Basically it is a 50% off promotion to kick start the SGEM CME program.
Signing up for your education credits is easy. This is because "there can be only one" subscription option. You can earn up to 26 credit hours in six months. The price is $195 ($7.50/credit hour) for the six months. Again, those that sign up by January 31st, 2021 will receive a bonus 26 CME credits for free. That makes it only $3.75 for every credit hour of SGEM content! It is as easy as 1-2-3 to start earning your CME credit today. Just click on the picture of the Highlander for all the details.
SGEM Season#7 Book
The SGEM continues to grow and has approximately 51,000 subscribers. It would not be so successful without the wonderful people like you who listen to the podcast and read the blog every week.
I would also like to thank the SGEMHOP Team (Drs. Bond, Heitz and Morgenstern), PaperinaPic creator (Dr. Challen), all the guest skeptics and my best friend Chris Carpenter.
The SGEM Season#7 book was put together with the help of my daughter Sage Milne. She came up with the steampunk theme and drew all the artwork for the book. The cover art was inspired by the 1982 movie TRON. She knows very well how much I like 1980's movies and music. Sage is currently doing a degree in Global Health Studies at Huron University College.
Here are links to all six seasons of the SGEM as PDF books. You can download each season by clicking on the link: Season#1, Season#2, Season#3, Season#4, Season#5 and Season#6
If you are looking for the amazing theme music that helps with the KT for each SGEM episode, you can find them on Spotify. Most of the music comes from the 1980’s because it is clearly the best musical era.
New SGEMHOP Faculty
We are pleased to announce two new wonderful additions to the SGEM Hot Off the Press faculty: Dr. Kirsty Challen and Dr. Lauren Westafer. The SGEMHOP is done in collaboration with Society of Academic Emergency Medicine (SAEM). SAEM publishes the journal Academic Emergency Medicine (AEM).
SGEMHOP Five Step Process:
A paper that has been submitted, peer-reviewed, and ultimately accepted for publication in AEM is selected by the SGEMHOP faculty and made available free open access.
We put our skeptical eye upon the manuscript using a standard critical appraisal tool to probe the paper for its validity and publish an SGEM blog of the paper.
One of the authors of the paper is invited on the SGEM podcast, available on iTunes, Google Play and Spotify, as our guest to answer a number of nerdy questions (five or ten) to help us better understand the research.
The SGEM audience gets a chance to interact with the author by posting comments and questions on the SGEM blog.
The best social media feedback will be published along with a summary of the SGEMHOP episode in a subsequent issue of AEM.
Dr. Kirsty Challen
Dr. Kirsty Challen (@KirstyChallen) is a Consultant in Emergency Medicine and Emergency Medicine Research Lead at Lancashire Teaching Hospitals Trust (North West England). She completed her undergraduate and postgraduate training in North West England, acquiring a History of Medicine BSc, and has a PhD in Health Services Research. She is Chair of the Royal College of Emergency Medicine Women in Emergency Medicine group, and involved with the RCEM Public Health and Informatics groups. Kirsty regards her inner toddler as a great asset to medicine and finds #FOAMEd very helpful in answering “but WHY?” She is also the creator of the wonderful infographics called #PaperinaPic. When not at work she is happiest out running on a muddy mountain.
Dr. Lauren Westafer
Lauren Westafer, DO, MPH, MS (she/her) is an Assistant Professor in the Department of Emergency Medicine at the University of Massachusetts Medical School - Baystate and Director of the Emergency Medicine Research Fellowship. Lauren is an implementation science researcher and FOAMed enthusiast. She is the author of the blog, The Short Coat, and cofounder of the emergency medicine podcast, FOAMcast. She lectures internationally on social media in medical education, critical appraisal and journal club design, pulmonary embolism, and advancing the quality of healthcare for LGBTQI+ patients. In addition, she serves as the Social Media Editor and a research methodology editor for Annals of Emergency Medicine and an Associate Editor for the NEJM Journal Watch Emergency Medicine.
We are very excited to have both of these talented physicians and educators as part of the SGEMHOP faculty.
The SGEM will be back next episode doing a structured critical appraisal of a recent publication and will continue trying to cut the knowledge translation window down from over ten years to less than one year using the power of social media. Ultimately we want patients to get the best care, based on the best evidence.
One last thing. Could you please write a review of the SGEM podcast on iTunes, like the SGEM on Facebook and follow the SGEM on Twitter? Thank you for your ongoing support of the SGEM and all the best in 2021.
REMEMBER TO BE SKEPTICAL OF ANYTHING YOU LEARN, EVEN IF YOU HEARD IT ON THE SKEPTICS’ GUIDE TO EMERGENCY MEDICINE.

Jan 4, 2021 • 1h 33min
SGEM Xtra: EBM Master Class – McGill University Grand Rounds 2020
Date: January 4th, 2021
This is an SGEM Xtra episode. I had the honour of presenting at the McGill University Emergency Medicine Academic Grand Rounds. They titled the talk "Evidence-Based Medicine Master Class". The presentation is available to watch on YouTube, listen to on iTunes and all the slides can be downloaded (McGill 2020 Part 1 and McGill 2020 Part 2).
Five Objectives:
Look at the burden of proof and talk about what is science
Discuss EBM and give a five step process of critical appraisal
Talk about biases and logical fallacies
Go through a checklist for randomized controlled trials
Record a live episode of the SGEM
1) Who has the Burden of Proof and What is Science?
Those making the claim have the burden of proof. It is called a burden because it is hard, not because it is easy. We start with the null hypothesis (no superiority). Evidence is presented to convince us to reject the null and accept that there is superiority to the claim. If the evidence is convincing, we should reject the null. If the evidence is not convincing, we need to accept the null hypothesis.
It is a logical fallacy to shift the burden of proof onto those who say they do not accept the claim. They do not have to prove the claim wrong; rather, they are simply not convinced that the claim is valid/“true”. This is an important distinction in epistemology.
What is science? It is the most reliable method for exploring the natural world. There are a number of qualities of science: Iterative, falsifiable, self-correcting and proportional.
What science isn’t is “certain”. We can have confidence around a point estimate of an observed effect size and our confidence should be in part proportional to the strength of the evidence. Science also does not make “truth” claims. Scientists do make mistakes, are flawed and susceptible to cognitive biases.
Physicians took on the image of a scientist by co-opting the white coat. Traditionally, scientists wore beige and physicians wore black to signify the somber nature of their work (like the clergy). Then came along the germ theory of disease and other scientific knowledge.
It was the Flexner Report in 1910 that fundamentally changed medical education and improved standards. You could get a medical degree in only one year before the Flexner Report. The white coat was now a symbol of scientific rigour separating physicians from “snake oil salesman”.
Many medical schools still have white coat ceremonies. However, only 1 in 8 physicians still report wearing a white lab coat today (Globe and Mail).
Science is usually iterative. Sometimes science takes giant leaps forward, but usually it takes baby steps. You have probably heard the phrase "standing on the shoulders of giants". In Greek mythology, the blind giant Orion carried his servant Cedalion on his shoulders to act as the giant's eyes.
The more familiar expression is attributed to Sir Isaac Newton, "If I have seen further it is by standing on the shoulders of Giants.” It has been suggested that Newton may have been throwing shade at Robert Hooke.
Hooke was the first head of the Royal Society in England. Hooke was described as being a small man and not very attractive. The rivalry between Newton and Hooke is well documented. The comments about seeing farther because of being on the shoulders of giants was thought to be a dig at Hooke's short stature. However, this seems to be gossip and has not been proven.
Science is also falsifiable. If it is not falsifiable it is outside the realm/dominion of science. This philosophy of science was put forth by Karl Popper in 1934. A great example of falsifiability was the claim that all swans are white. All it takes is one black swan to falsify the claim. There are some philosophers that dispute Popper's claim about falsifiability.
Science is self-correcting. Because science is iterative and falsifiable it is also self-correcting. Science gets updated. We hopefully learn and get closer to the “truth” over time. Medical reversal is a thing, and there is a great book on this issue by Drs. Prasad and Cifu called Ending Medical Reversal: Improving Outcomes, Saving Lives.
The evidence required to accept a claim should be in part proportional to the claim itself. The classic example was given by the famous scientist Carl Sagan (astronomer, astrophysicist and science communicator), who did the TV series Cosmos and wrote a number of popular science books (The Dragons of Eden). Sagan made the claim that there was a “fire-breathing dragon that lives in his garage”.
The quality of evidence to convince you of something should be in part proportional to the claim being asserted. The summary is the famous quote by Carl Sagan that "extraordinary claims require extraordinary evidence".
Science does not make claims about the truth. It gives an approximation of the best point estimate of the observed effect. It’s the best known method for exploring the natural world. Science has no agency but rather it is a process. However, scientists are flawed individuals who make mistakes. As Blaise Pascal said: "There is no such thing as the truth, we can only deliver the best available evidence and calculate a probability".
Real World Example:
Marik et al made the claim in 2016 that a vitamin C cocktail (hydrocortisone, thiamine and vitamin C) could cure sepsis. He published a before-and-after observational study with 94 patients. The result was a 32% absolute decrease in mortality (NNT 3). We covered this study on SGEM#174: Don’t Believe the Hype – Vitamin C Cocktail for Sepsis. Dr. Jeremy Faust (FOAMCast) and I had eleven other skeptics comment on Dr. Marik's study. Our bottom line was that vitamin C, hydrocortisone and thiamine was associated with lower mortality in severe sepsis and septic shock patients in this one small, single-centre retrospective before-after study, but causation has yet to be demonstrated.
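The reported effect size maps onto the number needed to treat (NNT) with simple arithmetic: the NNT is the reciprocal of the absolute risk reduction (ARR). A minimal sketch, where the individual mortality percentages are illustrative and only the roughly 32% absolute decrease comes from the episode:

```python
# Absolute risk reduction (ARR) and number needed to treat (NNT).
# The mortality percentages below are illustrative placeholders; the
# study reported roughly a 32% absolute decrease in mortality.
control_mortality = 0.404   # mortality before the protocol (illustrative)
treated_mortality = 0.085   # mortality after the protocol (illustrative)

arr = control_mortality - treated_mortality  # absolute risk reduction, ~0.32
nnt = 1 / arr                                # ~3.1, reported as NNT 3
```

An NNT of about 3 for mortality in sepsis would be an extraordinary effect, which is exactly why an extraordinary level of evidence was needed before accepting the claim.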
Higher-quality studies have since been published looking at the issue. Putzu et al published an SRMA of RCTs including critically ill patients (not just sepsis). They found no statistical difference in mortality. This was covered in SGEM#268: Vitamin C Not Ready for Graduation to Routine Use.
An RCT by Fujii et al was published in JAMA 2020. It specifically looked at 216 patients with septic shock and found no statistical mortality benefit to vitamin C.
Has the burden of proof been met that vitamin C is a cure for sepsis? I am not convinced by the available evidence. Note that this is different than claiming vitamin C does not work. That would shift the burden of proof. I am simply accepting the null hypothesis of no mortality superiority of vitamin C compared to placebo in septic patients.
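The accept-or-reject logic can be made concrete with a standard two-proportion z-test. This is a sketch with hypothetical counts (roughly the size of the Fujii trial, but not its actual data) showing how a non-significant result leaves us accepting the null hypothesis of no mortality difference:

```python
from math import sqrt, erf

def two_proportion_z_test(deaths_a, n_a, deaths_b, n_b):
    """Two-sided z-test for a difference between two mortality proportions."""
    p_a, p_b = deaths_a / n_a, deaths_b / n_b
    p_pool = (deaths_a + deaths_b) / (n_a + n_b)            # pooled proportion under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error under H0
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided p via normal CDF
    return z, p_value

# Hypothetical counts, roughly the size of the ~216-patient trial:
z, p = two_proportion_z_test(25, 108, 30, 108)
# p > 0.05: the evidence is not convincing, so we do not reject the null.
```

Note that "p > 0.05" does not prove the drug is useless; it means the burden of proof for superiority has not been met.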
It is ok to say "I don't know" if vitamin C works. It reminds me of a quote from Dr. Richard Feynman: I have degrees of confidence or certainty about various positions. These positions are tentative and subject to change. I am not absolutely certain about anything. To be absolutely certain could be considered a logical fallacy (nirvana fallacy). Logical fallacies will be discussed later.
2) Evidence-Based Medicine and a Five Step Process to Critical Appraisal
This was defined by Dr. David Sackett over 20 years ago (Sackett et al BMJ 1996). He defined EBM as “The conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients.” I really like this definition, and the only tweak I would have added would be to include the word "shared".
The definition of EBM can be visually displayed as a Venn diagram. There are three components: the literature, our clinical judgement, and the patient's values/preferences.
Many people make the mistake of thinking that EBM is just about the scientific literature. This is not true. You need to know about the relevant scientific information. The literature should inform our care but not dictate our care.
Clinical judgement is very important. Sometimes you will have lots of experience and other times you may have very limited experience.
The third component of EBM is the patient. We need to know what they value and prefer, and the easiest way to find out is to ask them. It starts with patient care and it ends with patient care. We all want patients to get the best care, based on the best evidence.
Levels of Evidence:
There is a hierarchy to the evidence, and we want to use the best evidence to inform our patient care. The levels of evidence are usually described using a pyramid. The lowest level is expert opinion, the middle of the hierarchy is a randomized controlled trial, and the top is considered a systematic review.
The systematic review +/- a meta-analysis is put on the top of the EBM level of evidence pyramid. However, we need to watch out for garbage in, garbage out (GIGO). If you take a number of crappy little studies (CLS), mash them up in a meat grinder, and spit out a point estimate down to the 5th decimal place with some impressive p-value, the result is an illusion of certainty when certainty does not exist.
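The GIGO point can be illustrated with the standard inverse-variance (fixed-effect) pooling formula that meta-analyses use: the pooled standard error is always smaller than that of any included study, so the precision looks impressive even when every input is a biased crappy little study. The numbers below are made up for illustration:

```python
from math import sqrt

def fixed_effect_pool(studies):
    """Inverse-variance (fixed-effect) pooling of (estimate, standard error) pairs."""
    weights = [1 / se ** 2 for _, se in studies]                 # weight = 1/variance
    pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = sqrt(1 / sum(weights))                           # always < smallest input SE
    return pooled, pooled_se

# Three hypothetical small studies: (effect estimate, standard error)
studies = [(0.41, 0.30), (0.38, 0.25), (0.45, 0.35)]
pooled, se = fixed_effect_pool(studies)
# The pooled estimate can now be quoted to many decimal places with a
# narrow confidence interval, but pooling cannot remove any bias baked
# into the individual studies: garbage in, garbage out.
```

Pooling shrinks the random error, but any systematic bias shared by the inputs passes straight through to the pooled estimate.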
EBM Limitations:
Harm and the parachutes argument - Smith and Pell BMJ 2003, Hayes et al CMAJ 2018, and Yeh et al BMJ 2018
Most published research findings are false - Ioannidis PLoS 2005
Guidelines are just cookbook medicine
Good evidence is ignored
Too busy for EBM
Five Alternatives to EBM:
This was adapted from a paper by Isaacs and Fitzgerald BMJ 1999. To paraphrase Sir Winston Churchill, EBM is the worst form of medicine except for all the others that have been tried.
Eminence Based Medicine - The more senior the colleague, the less importance he or she placed on the need for anything as mundane as evidence. Experience, it seems, is worth any amount of evidence.

Dec 26, 2020 • 32min
SGEM#313: Here Comes A Regular to the ED
Date: December 18th, 2020
Reference: Hulme et al. Mortality among patients with frequent emergency department use for alcohol-related reasons in Ontario: a population-based cohort study. CMAJ 2020
Guest Skeptic: Dr. Hasan Sheikh is an emergency and addictions physician in Toronto and a lecturer at the University of Toronto. He holds a Masters in Public Administration from the Harvard Kennedy School of Government.
Hasan was on an SGEM Xtra last year discussing the Canadian Association of Emergency Physician's (CAEP) position statement on Dental care in Canada.
"The Canadian Association of Emergency Physicians believes that every Canadian should have affordable, timely, and equitable access to dental care."
CAEP has put out other position statements. The most recent is on sick notes for minor illness. For a list of other position statements from CAEP click on this LINK.
Case: A 45-year-old male with no fixed address is found by a bystander with decreased level of consciousness (LOC) on the street. Emergency Medical Services (EMS) is called, and the patient is brought to the emergency department (ED). An empty bottle of vodka is found on the patient, and the decreased LOC is suspected to be due to alcohol intoxication. It is the patient’s fifth visit to the ED in the last two weeks with a similar presentation. The patient is observed over many hours, their LOC improves, and they are discharged after demonstrating that they can ambulate safely.
Background: Alcohol is a leading driver of morbidity and mortality worldwide (1). Approximately 5% of all global deaths are attributed to alcohol consumption. This works out to an estimated 3 million deaths due to alcohol (2).
Alcohol was the single greatest risk factor for ill health worldwide among people aged 15–49 years according to the 2016 Global Burden of Disease Study (3). There are more hospital admissions in Canada for alcohol-attributable conditions than for myocardial infarction (4).
There is a cost associated with alcohol-related harms. In Canada, that number is around $14.6 billion a year, with $3.3 billion in health care costs (5). The rate of alcohol-related ED visits has also increased more than four times faster than the overall rate of ED visits (6).
This trend of increasing alcohol related ED visits is not unique to Canada. It has also been reported in England, Australia and the US (7-9).
Clinical Question: What is the one-year overall mortality rate for adults with frequent visits to the ED for alcohol related reasons?
Reference: Hulme et al. Mortality among patients with frequent emergency department use for alcohol-related reasons in Ontario: a population-based cohort study. CMAJ 2020
Population: Adults aged 16-105 years who made frequent ED visits for alcohol-related reasons (two or more ED visits in a year).
Excluded: Data inconsistencies, not Ontario residents, Age < 16 or > 105 or death at discharge
Exposure: Patients with ED visits for alcohol-related mental and behavioural disorders, using the ICD-10-CA code of F10. This includes simple intoxication and withdrawal
Comparison: Comparisons were made between groups of frequent ED users for alcohol-related reasons, including those that visited the ED twice in a year, 3-4 times in a year, and greater than four times in a year
Outcome:
Primary Outcome: One-year mortality, adjusted for age, sex, income, rural residence, and presence of co-morbidities
Secondary Outcomes: Mental and behavioural disorders, diseases of the circulatory system, diseases of the digestive system, and external causes of morbidity and mortality (e.g., accidents, including accidental poisoning, accidental injuries, injuries, intentional self-harm, assault) with frequency >5%. Cause of death using alcohol-attributable ICD-10-CA codes as well as ICD-10-CA codes for death by suicide.
Authors’ Conclusions: “We observed a high mortality rate among relatively young, mostly urban, lower-income people with frequent emergency department visits for alcohol-related reasons. These visits are opportunities for intervention in a high-risk population to reduce a substantial mortality burden.”
Quality Checklist for Observational Study:
Did the study address a clearly focused issue? Yes
Did the authors use an appropriate method to answer their question? Yes
Was the cohort recruited in an acceptable way? Yes
Was the exposure accurately measured to minimize bias? Unsure
Was the outcome accurately measured to minimize bias? Yes
Have the authors identified all-important confounding factors? Unsure
Was the follow up of subjects complete enough? Yes
How precise are the results? Fairly precise
Do you believe the results? Yes
Can the results be applied to the local population? Unsure
Do the results of this study fit with other available evidence? Yes
Key Results: They identified 160,170 alcohol-related ED visits that had at least one more alcohol-related visit in the 1-year time frame. This represented a cohort of 25,813 patients. The median age was 45 years, two-thirds were male, 88% were urban, 59% arrived by ambulance and 13% were admitted to hospital on their index visit.
Increasing ED visits was associated with an increased all-cause mortality
The all-cause one-year mortality rate was 5.4% overall, ranging from 4.7% among patients with 2 visits to 8.8% among those with 5 or more visits. Death due to external causes (e.g., suicide, accidents) was most common.
The data could also be represented in years of potential life lost (YPLL). This showed the all-cause one-year mortality was 121 YPLL overall, ranging from 97 YPLL among patients with two visits to 231 YPLL among those with five or more visits.
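For readers unfamiliar with the metric, years of potential life lost (YPLL) weights deaths by how premature they are: each death contributes the years remaining to a reference age (commonly 75). A minimal sketch with hypothetical ages at death, not the study's data:

```python
# Years of potential life lost (YPLL): each death contributes the years
# remaining to a reference age. The ages below are illustrative only.
REFERENCE_AGE = 75  # a commonly used cutoff; the study's choice may differ

ages_at_death = [45, 52, 38, 61]  # hypothetical ages at death
ypll = sum(max(0, REFERENCE_AGE - age) for age in ages_at_death)
# A death at 38 contributes 37 years, a death at 61 only 14, so the
# metric emphasizes premature deaths in a way a crude mortality rate cannot.
```

This is why a cohort with a median age of 45 generates such a large YPLL burden despite a one-year mortality rate of "only" 5.4%.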
Listen to the podcast to hear Hasan answer my five nerdy questions.
1) Associations: The most obvious limitation of this study is its retrospective observational nature. The multiple visits for alcohol use may be a surrogate marker for something else that is causing the observed increase in mortality. You did adjust for age, sex, income, rural residence, and presence of comorbidities. However, there could be other unmeasured confounders responsible for the observed associated increase in all-cause mortality.
2) Validation: The use of ICD-10-CA code F10 to ascertain alcohol use disorders among patients presenting to the emergency department has not been validated. This does not mean it is not valid but should be interpreted with extra caution.
3) Interventions: It is mentioned in the conclusions that effective interventions have the potential to prevent premature mortality and reduce hospital use. In the introduction you state a systematic review suggests that screening and brief intervention for alcohol-related problems in the ED is a promising approach for reducing problematic alcohol consumption. This was a publication from 2002 (D’Onofrio and Degutis AEM). Is there high-quality evidence for effective interventions that prevent mortality in these high-risk individuals?
4) Rural Areas: More than 10% of the cohort was from rural areas. This will make access to Rapid Access Addiction Medicine clinics more difficult. Does identifying these high-risk individuals make a difference if they cannot obtain the services?
5) Comparison: This data set was not compared to other chronic conditions (diabetes, COPD, CHF, etc.) that present frequently to the ED. It would have been interesting to know whether the mortality rate is higher, lower, or the same for people who present to the ED multiple times in one year for alcohol-related reasons.
Comment on Authors’ Conclusion Compared to SGEM Conclusion: We generally agree with the authors’ conclusions.
SGEM Bottom Line: Be aware that patients presenting frequently to the ED with alcohol related issues have an associated high risk of mortality in the next year.
Case Resolution: You approach the patient with concerns over his multiple alcohol related ED visits and offer support in the department and at discharge.
Offer referral/support to low-barrier addiction medicine services (e.g., a RAAM clinic)
Offer anti-craving medications: naltrexone if no contraindications (especially no opiate use in the last 10 days) – start at 50 mg po daily x 1 week
Consider additional anti-craving medications: gabapentin 300-600 mg po three-times a day x 1 week to help with cravings/reduce withdrawal symptoms
Hope to engage people in care by taking this approach
Dr. Hasan Sheikh
Clinical Application: Have a high index of suspicion for this population. Ensure alcohol withdrawal is adequately treated in the ED. Get comfortable with anti-craving medications, including naltrexone and gabapentin. Familiarize yourself with the resources in your community, especially Rapid Access Addiction Medicine clinics.
What Do I Tell My Patient? I see that you have had multiple visits over the last year. We are worried about you and want to get you the care you need and deserve. I can offer you some medications that could help. Would you also be interested in knowing more about our special Rapid Access Addiction Medicine clinics?
Keener Kontest: Last week's winner was Joshua McGough. Josh is a 3rd year medical student. He knew the word influenza comes from the Latin "influentia" which translates to "influence". It refers to the idea that the disease was attributed to the influence of the stars!
Listen to the SGEM podcast to hear this week's question. Send your answer to TheSGEM@gmail.com with “keener” in the subject line. The first correct answer will receive a cool skeptical prize.
Remember to be skeptical of anything you learn, even if you heard it on the Skeptics’ Guide to Emergency Medicine.
Additional Resources:
References:
Rehm J, Mathers C, Popova S, et al.

Dec 23, 2020 • 1h 1min
SGEM Xtra: Relax – Damm It!
Date: December 21st, 2020
Professor Tim Caulfield
This is an SGEM Xtra book review. I had the pleasure of interviewing Professor Timothy Caulfield. Tim is a Canadian professor of law at the University of Alberta, the Research Director of its Health Law Institute, and current Canada Research Chair in Health Law and Policy. His area of expertise is in legal, policy and ethical issues in medical research and its commercialization.
Tim came on the SGEM and discussed his new book called Relax, Dammit! A User's Guide to the Age of Anxiety. Listen to the podcast to hear us discuss his new book, skepticism, and science communication in general.
The SGEM has a global audience with close to 45,000 subscribers. Many of the SGEMers live in the US and Tim's book has a different title in America. It is called Your Day Your Way: The Facts and Fictions Behind Your Daily Decisions. Tim gives some insight on the podcast why there is a different title in Canada and the US.
Tim and I met in 2015 at the Canadian Association of Emergency Physicians (CAEP) Annual Conference in Edmonton. He was a keynote speaker and discussed his previous book Is Gwyneth Paltrow Wrong about Everything? How the Famous Sell Us Elixirs of Health, Beauty & Happiness. Tim gave a fantastic presentation. I was in Edmonton talking nerdy as part of the CAEP TV initiative. We have been in contact via social media ever since trying to improve science communication.
Besides writing books, Tim has starred in his own Netflix series called A User's Guide to Cheating Death. He has also collaborated with Dr. Jennifer Gunter, who wrote the book The Vagina Bible. Dr. Gunter visited BatDoc a few years ago for an SGEM Xtra episode.
A Few of Professor Caulfield's academic publications:
Commentary: the law, unproven CAM and the two‐hats fallacy. Focus on Alternative and Complementary Therapies, 17: 4-8.
Stem cell hype: Media portrayal of therapy translation. Science Translational Medicine.11 Mar 2015: Vol. 7, Issue 278, pp. 278ps4
Injecting doubt: responding to the naturopathic anti-vaccination rhetoric. Journal of Law and the Biosciences, Volume 4, Issue 2, August 2017, Pages 229–249
COVID-19 and ‘immune boosting’ on the internet: a content analysis of Google search. BMJ Open 2020;10:e040989.
Previous books reviewed on the SGEM:
Jeanne Lenzer The Danger Within Us: America's Untested, Unregulated Medical Device Industry and One Man's Battle to Survive It.
Dr. Steven Novella The Skeptics' Guide to the Universe: How to Know What's Really Real in a World Increasingly Full of Fake.
Dr. Brian Goldman The Power of Kindness: Why Empathy is Essential in Everyday Life
Tim's new book Relax Dammit! is organized into the day in the life of Tim Caulfield. It discusses the science behind our daily activities. On the podcast Tim provides five examples that he thinks might be interesting to the SGEM audience. This includes: Breakfast, coffee, commuting to work, napping and raw milk.
I hope you like this type of SGEM Xtra. Let me know what you think and I will consider doing more book reviews with authors if the feedback is positive.
The SGEM will be back next episode with a structured critical review of a recent publication, trying to cut the knowledge translation window down from over ten years to less than one year.
Remember to be skeptical of anything you learn, even if you heard it on the Skeptics’ Guide to Emergency Medicine.

Dec 19, 2020 • 25min
SGEM#312: Oseltamivir is like Bad Medicine – for Influenza
Date: December 16th, 2020
Reference: Butler et al. Oseltamivir plus usual care versus usual care for influenza-like illness in primary care: an open-label, pragmatic, randomised controlled trial. The Lancet 2020
Guest Skeptic: Dr. Justin Morgenstern is an emergency physician and the creator of the #FOAMed project called First10EM.com. He has a great new blog post about how we are failing to protect our healthcare workers during COVID-19.
Case: A 45-year-old female presents to her primary care clinician complaining of fever, sore throat and muscle aches. She did not get a flu shot this year. You diagnose her with an influenza-like illness (ILI). She wants to know if taking an anti-viral like oseltamivir (Tamiflu) will help.
Background: We covered oseltamivir six years ago in SGEM#98. This is still the longest Cochrane review (300+ pages) I have ever read (Jefferson et al 2014a). The overall bottom line was that, when balancing potential risks and potential benefits, the evidence does not support routine use of neuraminidase inhibitors like oseltamivir for the treatment or prevention of influenza in any individual.
There has been some controversy around oseltamivir. It was approved by licensing agencies and promoted by the WHO based on unpublished trials. None of those agencies had actually looked at the unpublished data. In fact, the primary authors of key oseltamivir trials had never been given access to the data – Roche just told them what the data supposedly said. Other papers were ghost-written (Cohen 2009). The BMJ was involved in a legal battle with Roche for half a decade trying to get access to that information. When they finally got their hands on the data, the conclusions of the reviews suddenly changed. After countries had spent billions stockpiling the drug, it turned out that oseltamivir had no effect on influenza complications, was not effective in prophylaxis, and had significantly more harms than originally reported (Jefferson 2014a; Jefferson 2014b). You can read more details about this controversy in the BMJ.
The oseltamivir issue is a great example of the problems with conflicts of interest (COI) in medical research. This is something I have spoken about often. It is not an ad hominem attack on any of the authors. Our current system of medical research involves industry funding. COIs are just another data point that needs to be considered. This is because the evidence shows COIs can introduce bias into RCTs, SRMA and Clinical Practice Guidelines. When I use the term bias I am referring to something that systematically moves us away from the “truth”.
There is specific evidence of bias in the oseltamivir literature. Dunn and colleagues looked at 37 assessments done in 26 systematic reviews and then compared their conclusions to the financial conflicts of interest of the authors. Among eight assessments where the authors had conflicts, seven (88%) had favourable conclusions about neuraminidase inhibitors. However, among the 29 assessments that were made by authors without conflicts, only five (17%) were positive (Dunn et al 2014).
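The strength of that association can be checked with a simple one-sided Fisher exact test on the 2×2 table from Dunn and colleagues; a minimal sketch using only the Python standard library (the tail probability is computed by hand from the hypergeometric distribution):

```python
from math import comb

# 2x2 table from Dunn et al 2014: 7 of 8 conflicted assessments were
# favourable, versus 5 of 29 assessments by authors without conflicts.
conflicted_fav, conflicted_total = 7, 8
unconflicted_fav, unconflicted_total = 5, 29

total = conflicted_total + unconflicted_total      # 37 assessments in all
favourable = conflicted_fav + unconflicted_fav     # 12 favourable in all

# One-sided Fisher exact test: probability of seeing 7 or more favourable
# assessments among the 8 conflicted ones if conclusions were independent
# of COI status (hypergeometric tail).
p = sum(
    comb(favourable, k) * comb(total - favourable, conflicted_total - k)
    for k in range(conflicted_fav, conflicted_total + 1)
) / comb(total, conflicted_total)
print(f"one-sided p = {p:.5f}")  # well below 0.001
```

Under independence, a split this lopsided would be extremely unlikely, which is consistent with the concern that COIs are associated with favourable conclusions.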
The current best evidence shows that oseltamivir (Jefferson et al 2014a):
Decreased time to first alleviation of symptoms by less than one day
Does not statistically change hospital admission rate (1.7% vs 1.8%)
Does increase nausea (NNH 28) and vomiting (NNH 22)
Does increase neuropsychiatric events (NNH 94)
Does increase headaches (NNH 32)
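Each NNH above is just the reciprocal of the absolute risk increase; a minimal sketch of the arithmetic (the event rates below are illustrative, not taken from the Cochrane review):

```python
def nnh(event_rate_treatment, event_rate_control):
    """Number needed to harm: 1 / absolute risk increase."""
    ari = event_rate_treatment - event_rate_control
    if ari <= 0:
        raise ValueError("No excess harm in the treatment group")
    return 1 / ari

# Illustrative rates only: a 3.6 percentage-point excess of nausea
# in the treatment arm corresponds to an NNH of about 28.
print(round(nnh(0.104, 0.068)))  # -> 28
```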
Clinical Question: Does oseltamivir improve time to recovery in patients presenting to their primary care clinician with an influenza-like illness?
Reference: Butler et al. Oseltamivir plus usual care versus usual care for influenza-like illness in primary care: an open-label, pragmatic, randomised controlled trial. The Lancet 2020.
Population: Patients from 15 European countries over three influenza seasons who were one year of age and older and who presented to their primary care clinician with symptoms of influenza-like illness (ILI). ILI was defined as a “sudden onset of self-reported fever, with at least one respiratory symptom (cough, sore throat, or running or congested nose) and one systemic symptom (headache, muscle ache, sweats or chills, or tiredness), with symptom duration of 72 h or less during a seasonal influenza epidemic.”
Exclusions: Chronic renal failure, substantial impaired immunity, patients in whom the treating clinician thought Tamiflu or admission to hospital was required, allergy, planned general anesthesia in the next two weeks, life expectancy less than six months, severe hepatic impairment, requirement for any live viral vaccine in the next seven days, and in some jurisdictions pregnant or lactating women.
Intervention: Oseltamivir (Tamiflu)
75 mg by mouth twice daily for five days in adults and children more than 40 kg.
For children (13 years or younger), oral suspension was given according to weight (children weighing 10–15 kg received 30 mg, >15–23 kg received 45 mg, >23–40 kg received 60 mg, and >40 kg received 75 mg).
Comparison: Usual care
Outcome:
Primary Outcome: Patient-reported time to recovery based on daily symptom journals. Recovery was defined as a return to usual daily activities with fever, headache, and muscle ache rated as minor or no problem; this was assessed overall and in key subgroups.
Secondary Outcomes: Cost-effectiveness, hospital admissions, complications related to ILI, repeat attendance in general practice, time to alleviation of symptoms of ILI, incidence of new or worsening symptoms, time to initial reduction in severity of symptoms, use of additional symptomatic and prescribed medication, including antibiotic, transmission of infection within household, self-management of symptoms of ILI and adverse events/harms.
Authors’ Conclusions: “Primary care patients with influenza-like illness treated with oseltamivir recovered one day sooner on average than those managed by usual care alone. Older, sicker patients with comorbidities and longer previous symptom duration recovered 2–3 days sooner.”
Quality Checklist for Randomized Clinical Trials:
The study population included or focused on those in the emergency department. No
The patients were adequately randomized. Yes
The randomization process was concealed. Yes
The patients were analyzed in the groups to which they were randomized. Yes
The study patients were recruited consecutively (i.e. no selection bias). Unsure
The patients in both groups were similar with respect to prognostic factors. Yes
All participants (patients, clinicians, outcome assessors) were unaware of group allocation. No
All groups were treated equally except for the intervention. Yes
Follow-up was complete (i.e. at least 80% for both groups). Yes
All patient-important outcomes were considered. No
The treatment effect was large enough and precise enough to be clinically significant. No
Key Results: They enrolled 3,266 people from 15 European countries over three influenza seasons. Slightly more than half (52%) had a PCR-confirmed influenza infection.
Primary Outcome: Time to recovery
Mean benefit from oseltamivir was 1.02 days (95% Bayesian credible interval [BCrI] 0.74 to 1.31)
Some people may not have heard of the Bayesian credible interval (BCrI). It is read much like the 95% confidence interval we usually talk about, but it comes from Bayesian rather than frequentist statistics: it is the range within which the true effect lies with 95% probability, given the data and the prior. In other words, Bayesian statistics make the prior probability an explicit part of the analysis.
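A toy illustration of how a credible interval is built, using a grid approximation with a flat prior and made-up recovery counts (none of these numbers come from the trial):

```python
import numpy as np

# Made-up data: 30 of 100 patients "recovered by day 6".
successes, n = 30, 100

theta = np.linspace(0, 1, 10001)          # candidate values of the true proportion
prior = np.ones_like(theta)               # flat prior: all values equally plausible
likelihood = theta**successes * (1 - theta)**(n - successes)
posterior = prior * likelihood
posterior /= posterior.sum()              # normalize to a probability distribution

# The 95% credible interval covers the central 95% of the posterior.
cdf = np.cumsum(posterior)
lower = theta[np.searchsorted(cdf, 0.025)]
upper = theta[np.searchsorted(cdf, 0.975)]
print(f"95% credible interval: {lower:.2f} to {upper:.2f}")
```

With a flat prior the credible interval lands close to the frequentist confidence interval; a more informative prior (e.g. skepticism from earlier oseltamivir trials) would pull it toward the prior.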
Secondary Outcomes:
No statistical differences identified in patient-reported repeat visits with health-care services, hospitalisations, x-ray confirmed pneumonia, or over-the-counter use of medication containing acetaminophen or ibuprofen
More nausea or vomiting in the intervention group compared to usual group (21% vs. 16%)
1) Conflicts of Interest (COIs): Multiple authors reported COIs with Roche (maker of oseltamivir). We already talked about this issue in the background material. We do not necessarily consider a COI a negative, but rather a potential source of bias that needs to be considered when interpreting the literature. There is a good review on this issue of COI and reducing bias by Bradley et al in the JRSM 2020.
2) Blinding: The lack of blinding combined with a subjective primary outcome is a major limitation of this trial. With that combination, we expect significant bias. We expect patients given the fancy pill to think they are getting better (placebo effect), while patients given nothing may perceive no change or even feel worse.
In my mind, there is really no reason to design the trial this way. The authors say that they “deliberately chose to do an open-label trial in the context of everyday practice, because effect sizes identified by placebo-controlled, efficacy studies with tight inclusion criteria might not be reproduced in routine care. We also wished to estimate time to patient reported recovery from the addition of an antiviral agent to usual care rather than benefit from oseltamivir treatment compared with placebo.”
The logic here seems to be completely backwards. There is certainly a role for real world trials, because treatments often look worse in the real world, when medications are not always taken and patients are not so tightly selected. However, the existing evidence for oseltamivir is weak, so a trial designed to see a worse outcome in a real-world setting doesn’t make a lot of sense. More importantly, the desire to study oseltamivir combined with usual care has nothing to do with using a placebo or properly blinding a trial. There are many trials that compare usual care plus a treatment to usual care plus placebo. Deciding to make the trial unblinded simply introduces unnecessary bias.
Interestingly,

Dec 12, 2020 • 33min
SGEM#311: Here We Go Loop De Loop to Treat Abscesses
Date: December 10th, 2020
Reference: Ladde et al. A Randomized Controlled Trial of Novel Loop Drainage Technique Versus Drainage and Packing in the Treatment of Skin Abscesses. AEM December 2020
Guest Skeptic: Dr. Kirsty Challen (@KirstyChallen) is a Consultant in Emergency Medicine and Emergency Medicine Research Lead at Lancashire Teaching Hospitals Trust (North West England). She is Chair of the Royal College of Emergency Medicine Women in Emergency Medicine group and involved with the RCEM Public Health and Informatics groups. Kirsty is also the creator of the wonderful infographics called #PaperinaPic.
Case: A 52-year-old previously healthy woman presents to your emergency department (ED) with an abscess on her left forearm. She is systemically well and there is no sign of tracking, so you decide to perform incision and drainage in the ED. When you ask your nursing colleague to set up the equipment, he wants to know if you will be using standard packing or a vessel loop drainage technique.
Background: We have covered the issue of abscesses multiple times on the SGEM. Way back in 2012 we looked at packing after incision and drainage (I&D) on SGEM#13 and concluded routine packing might not be necessary.
Another topic covered was whether irrigating after I&D was superior to not irrigating (SGEM#156). The bottom line from that critical appraisal was that irrigation is probably not necessary.
The use of antibiotics after I&D is another treatment modality that has been debated over the years. Chip Lange and I interviewed Dr. David Talan about his very good NEJM randomized control trial on SGEM#164. The bottom line was that the addition of TMP/SMX to the treatment of uncomplicated cutaneous abscesses represents an opportunity for shared decision-making.
One issue that has not been covered yet is the loop technique. This is when one or multiple vessel loops are put through the abscess cavity. This is done by making a couple of small incisions. An advantage of this technique over packing (which is not necessary) is that the vessel loops do not need to be changed or replaced.
Clinical Question: In uncomplicated abscesses drained in the ED, does the LOOP technique reduce treatment failure?
Reference: Ladde et al. A Randomized Controlled Trial of Novel Loop Drainage Technique Versus Drainage and Packing in the Treatment of Skin Abscesses. AEM December 2020
Population: Patients of any age undergoing ED drainage of skin abscesses
Exclusions: Patient with abscess located on hand, foot, or face or if they required admission and/or operative intervention.
Intervention: LOOP technique where a vessel tie is left in situ
Comparison: Standard packing with sterile ribbon gauze
Outcome:
Primary Outcome: Treatment failure (need for a further procedure, IV antibiotics or operative intervention), assessed at 36 hours.
Secondary Outcomes: Ease of procedure, pain at the time of treatment, ease of care at 36 hours, pain at 36 hours.
This is an SGEMHOP episode which means we have the lead author on the show. Dr. Ladde is an active academic emergency physician working at Orlando Regional Medical Center, serving as core faculty and Senior Associate Program Director. Jay also holds the rank of Professor of Emergency Medicine at the University of Central Florida College of Medicine.
Authors’ Conclusions: “The LOOP and packing techniques had similar failure rates for treatment of subcutaneous abscesses in adults, but the LOOP technique had significantly fewer failures in children. Overall, pain and patient satisfaction were significantly better in patients treated using the LOOP technique.”
Quality Checklist for Randomized Clinical Trials:
The study population included or focused on those in the emergency department. Yes
The patients were adequately randomized. Yes
The randomization process was concealed. Yes
The patients were analyzed in the groups to which they were randomized. Unsure
The study patients were recruited consecutively (i.e. no selection bias). No
The patients in both groups were similar with respect to prognostic factors. Yes
All participants (patients, clinicians, outcome assessors) were unaware of group allocation. No
All groups were treated equally except for the intervention. Yes
Follow-up was complete (i.e. at least 80% for both groups). Yes
All patient-important outcomes were considered. Yes
The treatment effect was large enough and precise enough to be clinically significant. No
Key Results: They recruited 256 participants into the trial with 90% (196) having outcome data. The mean age was 22 years, 71% were thought to also have cellulitis and 83% (213/256) received antibiotics at discharge. More than 80% of those prescribed antibiotics were given the combination of cephalexin and TMP-SMX.
No statistical difference in treatment failure between loop technique and packing.
Primary Outcome: Treatment failure
20% (95% CI 12-28%) in packing group vs. 13% (6-20%) LOOP group; p=0.25.
Secondary Outcomes:
Treatment Failure in Children: 21% (8-34%) in packing group vs 0% in LOOP group; p=0.002.
Ease and pain of procedure were the same, but ease of care and pain over 36 hours and patient satisfaction at 10 days were improved in the LOOP group
We have five nerdy questions for Jay. Listen to the podcast on iTunes to hear his responses.
1) Old Data: This study was conducted from March 14, 2009, until April 10, 2010. Why the delay in publication, and do you think the results are still valid today?
2) Convenience Sample: You only recruited when the research team was available. This is a common limitation seen in EM research. Did you manage to cover the whole working week adequately?
3) Children: You did a subgroup analysis of children. This was not pre-planned and should be considered hypothesis generating. Why do you think they appear to have responded differently and have you tried to confirm this result?
4) Blinding: We appreciate it can be difficult to blind the clinician and patient to treatment allocation. However, would it have been possible to blind the outcome assessors? The clinician could have removed the packing or loop and then a research assistant could have assessed the outcome blinded to treatment modality.
5) Comparison Group: You compared this to ribbon packing. We have evidence that this is not necessary (SGEM#13). Have you considered repeating the trial and investigating the LOOP technique and comparing it to not packing the abscess?
Comment on Authors’ Conclusion Compared to SGEM Conclusion: We agree with the authors’ conclusions about failure rates in adults, and pain and satisfaction overall. We are more cautious about the reduced failure rate in children and think this has room for further exploration.
SGEM Bottom Line: Consider putting in a vessel LOOP on your next uncomplicated abscess.
Case Resolution: You ask the nurse to set up for the LOOP technique and the patient leaves after the procedure with follow up planned.
Clinical Application: Using the LOOP technique can result in less pain and easier care for the patient in the 36 hours following I&D.
What Do I Tell My Patient? There are two techniques for draining an abscess, which have similar failure rates, but leaving a small piece of rubber through the wound rather than filling it with cloth ribbon makes it more comfortable over the next 36 hours.
Keener Kontest: Last week's winner was Dr. Matt Runnalls. He is an EM physician from Cambridge, Ontario. Matt knew that 3.2% of women over the age of 65 (1.9% of all people over the age of 65) present to the ED with dizziness/vertigo according to the 2017 NHAMCS database.
Listen to the SGEM podcast to hear this week's question. Send your answer to TheSGEM@gmail.com with "keener" in the subject line. The first correct answer will receive a cool skeptical prize.
SGEMHOP: Now it is your turn SGEMers. What do you think of this episode on the loop technique to treat abscesses in the ED? Tweet your comments using #SGEMHOP. What questions do you have for Jay and his team? Ask them on the SGEM blog. The best social media feedback will be published in AEM.
Also, don’t forget those of you who are subscribers to Academic Emergency Medicine can head over to the AEM home page to get CME credit for this podcast and article. We will put the process on the SGEM blog:
Go to the Wiley Health Learning website
Register and create a log in
Search for Academic Emergency Medicine – “December”
Complete the five questions and submit your answers
Please email Corey (coreyheitzmd@gmail.com) with any questions or difficulties.
Remember to be skeptical of anything you learn, even if you heard it on the Skeptics’ Guide to Emergency Medicine.

Dec 5, 2020 • 28min
SGEM#310: I Heard A Rumour – ER Docs are Not Great at the HINTS Exam
Date: November 30th, 2020
Reference: Ohle R et al. Can Emergency Physicians Accurately Rule Out a Central Cause of Vertigo Using the HINTS Examination? A Systematic Review and Meta-analysis. AEM 2020
Guest Skeptic: Dr. Mary McLean is an Assistant Program Director at St. John’s Riverside Hospital Emergency Medicine Residency in Yonkers, New York. She is the New York ACEP liaison for the Research and Education Committee and is a past ALL NYC EM Resident Education Fellow.
Case: A 50-year-old female presents to your community emergency department in the middle of the night with new-onset constant but mild vertigo and nausea. She has nystagmus but no other physical exam findings. You try meclizine, ondansetron, valium, and fluids, and nothing helps. Her head CT is negative (taken 3 hours after symptom onset). You’re about to call in your MRI tech from home, but then you remember reading that the HINTS exam is more sensitive than early MRI for diagnosis of posterior stroke. You wonder, “Why can’t I just rule out stroke with the HINTS exam? How hard can it be?” You perform the HINTS exam and the results are reassuring, but the patient’s symptoms persist…
Background: Up to 25% of patients presenting to the ED with acute vestibular syndrome (AVS) have a central cause of their vertigo - commonly posterior stroke. Posterior circulation strokes account for up to 25% of all ischemic strokes [1]. MRI diffusion-weighted imaging (DWI) is only 77% sensitive for detecting posterior stroke when performed within 24h of symptom onset [2,3]. As an alternative diagnostic method, the HINTS exam was first established in 2009 to better differentiate central from peripheral causes of AVS [4].
But what is the HINTS exam? It’s a combination of three structured bedside assessments: the head impulse test of vestibulo-ocular reflex function, nystagmus characterization in various gaze positions, and the test of skew for ocular alignment. When used by neurologists and neuro-ophthalmologists with extensive training in these exam components, it has been found to be nearly 100% sensitive and over 90% specific for central causes of AVS [5-8].
Over the past decade, some emergency physicians have adopted this examination into their own bedside clinical assessment and documentation. We’ve used it to make decisions for our patients, particularly when MRI is not readily available. We’ve even used it to help decide whether or not to get a head CT.
But we’ve done this without the extensive training undergone by neurologists and neuro-ophthalmologists, and without any evidence that the HINTS exam is diagnostically accurate in the hands of emergency physicians.
Clinical Question: Can emergency physicians accurately rule out a central cause of vertigo using the HINTS examination?
Reference: Ohle R et al. Can Emergency Physicians Accurately Rule Out a Central Cause of Vertigo Using the HINTS Examination? A Systematic Review and Meta-analysis. AEM 2020
Population: Adult patients presenting to an ED with AVS
Exclusions: Non-peer-reviewed studies, unpublished data, retrospective studies, vertigo which stopped before or during workup, incomplete HINTS exam, or studies with data overlapping with another study used
Intervention: HINTS examination by emergency physician, neurologist, or neuro-ophthalmologist
Comparison: CT and/or MRI
Outcome: Diagnostic accuracy of the HINTS examination for a central cause of AVS (i.e., posterior stroke)
Authors’ Conclusions: “The HINTS examination, when used in isolation by emergency physicians, has not been shown to be sufficiently accurate to rule out a stroke in those presenting with AVS.”
Quality Checklist for Systematic Review Diagnostic Studies:
The diagnostic question is clinically relevant with an established criterion standard. Unsure
The search for studies was detailed and exhaustive. Yes
The methodological quality of primary studies was assessed for common forms of diagnostic research bias. Yes
The assessment of studies was reproducible. Yes
There was low heterogeneity for estimates of sensitivity or specificity. No
The summary diagnostic accuracy is sufficiently precise to improve upon existing clinical decision-making models. Unsure
Key Results: They searched multiple electronic databases with no language or age restrictions and the gray literature. The authors identified 2,695 citations with five articles meeting inclusion criteria and a total of 617 patients.
There were no studies that included only emergency physicians performing the HINTS examination.
Essentially, the authors separated the studies into two cohorts according to the medical specialties of the HINTS examiners, and for each cohort they reported the sensitivity and specificity of the HINTS exam for diagnosis of posterior stroke.
The first cohort included neurologists and neuro-ophthalmologists. The sensitivity and specificity of the HINTS examination were 96.7% (95% CI; 93.1 to 98.5) and 94.8% (95% CI; 91 to 97.1).
In contrast, the second cohort (only one study) included emergency physicians and neurologists. The sensitivity and specificity were much lower at 83.3% (95% CI; 63.1 to 93.6) and 43.8% (95% CI; 36.7 to 51.2).
From these results, the authors deduced that the participation of emergency physicians in the latter cohort accounted for the reduced diagnostic accuracy.
They did not combine the five studies into one summary result due to the heterogeneity of the included studies which was >40%.
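To see why the drop in accuracy matters clinically, the two cohorts' estimates can be turned into post-test probabilities via the negative likelihood ratio; a minimal sketch (the 25% pretest probability is taken from the background figure above, and the sensitivity/specificity values from the two cohorts):

```python
def post_test_prob_negative(pretest, sensitivity, specificity):
    """Probability of disease after a negative test, via the negative LR."""
    lr_neg = (1 - sensitivity) / specificity
    pre_odds = pretest / (1 - pretest)
    post_odds = pre_odds * lr_neg
    return post_odds / (1 + post_odds)

# Assume a 25% pretest probability of a central cause of AVS.
# Neurologist / neuro-ophthalmologist cohort: sens 96.7%, spec 94.8%
print(round(post_test_prob_negative(0.25, 0.967, 0.948), 3))  # -> 0.011
# Mixed emergency physician / neurologist cohort: sens 83.3%, spec 43.8%
print(round(post_test_prob_negative(0.25, 0.833, 0.438), 3))  # -> 0.113
```

A negative HINTS exam in specialist hands drops the stroke probability to about 1%, while in the mixed cohort it only drops it to about 11% - far too high to safely rule out a posterior stroke.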
1) Available Studies: Unfortunately, there were only five studies meeting the inclusion criteria, for a total of 617 patients. This is a known limitation of systematic reviews. Authors are limited by the available studies.
2) Biases: On the QUADAS-2 assessment, four of these studies had at least one component at high risk of bias, and three studies had unclear reports on at least one component, meaning that quality was low. Adherence to the STARD reporting guidelines was mediocre to poor overall because only two of the studies reported on most of the items in the guidelines. We will put the figure that represents the risk of bias of the included studies in the show notes.
The reference standard used in these studies for all patients recruited was CT or MRI. We know CT imaging has a low sensitivity for posterior stroke. One of the studies allowed a negative head CT alone as adequate imaging to rule out posterior stroke. With such low sensitivity of CT imaging for posterior strokes, this crucial diagnosis can be missed. Even MRI-DWI has a reported sensitivity of only 77%.
This problem in diagnostic testing studies is called imperfect gold standard bias (copper standard bias): it can happen when the "gold" standard is not a very good test itself.
Another bias identified was partial verification bias (referral or workup bias). This happens when only a certain set of patients who were suspected of having the condition are verified by the reference standard (CT or MRI). So, the AVS patients with suspected strokes with a positive HINTS exam were more likely to get advanced neuroimaging than those with a negative HINTS exam. This would increase sensitivity but decrease specificity.
It is unknown if the original studies included consecutive patients or a convenience sample of patients. The latter could introduce spectrum bias. Sensitivity can depend on the spectrum of disease, while specificity can depend on the spectrum of non-disease. Four of the five studies had the ED physician identifying patients for referral. If patients with indeterminate or ambiguous presentations (rather than all patients presenting with AVS) were excluded, this could falsely raise sensitivity.
Because there were few studies, assessing publication bias was difficult.
For those interested in understanding the direction of bias in studies of diagnostic test accuracy, there is a fantastic article by Kohn et al in AEM 2013. There is also a good book by Dr. Pines and colleagues on the topic.
3) Heterogeneity: The authors used the I2 statistic to represent heterogeneity. The overall I2 values were 53% for sensitivity and 94% for specificity, likely representing moderate and considerable heterogeneity, respectively. Notably, for the neurologist and neuro-ophthalmologist cohort alone, the I2 was 0, representing low or negligible heterogeneity [9].
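For readers unfamiliar with I2, it is derived from Cochran's Q statistic; a minimal sketch (the Q values below are hypothetical, chosen only to show the behaviour):

```python
def i_squared(q, k):
    """I-squared statistic from Cochran's Q with k studies (df = k - 1)."""
    df = k - 1
    return max(0.0, (q - df) / q) * 100

# Hypothetical Q values for a 5-study meta-analysis:
print(round(i_squared(8.5, 5)))  # -> 53: moderate heterogeneity
print(round(i_squared(4.0, 5)))  # -> 0: Q at or below df floors I2 at zero
```

Roughly, I2 estimates the percentage of the variability across studies that reflects real differences between them rather than chance, which is why a value of 0 in the specialist-only cohort is reassuring.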
4) Precision and Reliability: There is poor precision overall - specifically for the cohort of emergency physicians with neurologists, the 95% confidence intervals were very wide for sensitivity (83%; 95% CI 63 to 94) and specificity (44%; 95% CI 37 to 51).
The HINTS exam cannot yet be relied upon by emergency physicians as a bedside tool to rule out stroke. We simply do not have the evidence to support its adequacy as a diagnostic tool in the hands of emergency physicians, and in fact we may now have a “hint” of evidence to the contrary. I get it. We all got so excited in 2009 when we read about the HINTS exam and how well it worked for neurologists and neuro-ophthalmologists. The idea of it was spellbinding and almost hypnotic – a bedside test that was quick and free and more sensitive than early MRI. We all looked up YouTube videos on how to perform the exam, and we had to triple check how to interpret it after we got back to our desks. We dove into this too fast and too deep, before receiving structured training on this difficult exam that we thought was simple, and before learning exactly which kinds of patients it was appropriate for. We need to take a step back and be methodical, and what we really need is a large multi-center RCT on the diagnostic accuracy of the HINTS exam in the hands of emergency physicians.
5) Generalizability and Validity of Conclusions: The authors did not restrict their search to any particular language,

Nov 28, 2020 • 30min
SGEM#309: That’s All Joe Asks of You – Wear a Mask
Date: November 25th, 2020
Guest Skeptic: Dr. Joe Vipond has worked as an emergency physician for twenty years, currently at the Rockyview General Hospital. He is the President of the national charity Canadian Association of Physicians for the Environment (CAPE), as well as the co-founder and co-chair of the local non-profit the Calgary Climate Hub, and during COVID, the co-founder of Masks4Canada. Joe grew up in Calgary and continues to live there with his wife and two daughters.
Reference: Bundgaard et al. Effectiveness of Adding a Mask Recommendation to Other Public Health Measures to Prevent SARS-CoV-2 Infection in Danish Mask Wearers: A Randomized Controlled Trial. Annals of Internal Medicine 2020
Case: Alberta is the last province in Canada that has yet to enact a mandatory mask policy. Should they do it?
Mask4All Debate
Background: During a respiratory pandemic, there remain substantial questions about the utility and risks of face masks for prevention of viral transmission. We debated universal mandatory masking back in the spring on an SGEM Xtra episode.
Some very well known evidence-based medicine experts like Dr. Trisha Greenhalgh were advocating in favour of stricter mask regulations based on the precautionary principle (Greenhalgh et al BMJ 2020). She was challenged on her position (Martin et al BMJ 2020) and responded with an article called: Laying straw men to rest (Greenhalgh JECP 2020).
A limitation of science is the available evidence. SARS-CoV-2 is a novel virus and we did not have much information specifically about the efficacy of masks. We needed to extrapolate from previous research on masks and other respiratory illnesses.
However, we do have a firm understanding of the germ theory of disease, and masks have been used for over 100 years as an infectious disease strategy. It was surgeons in the late 1890s who began wearing masks in the operating theaters. There was skepticism back then as to the efficacy of a "surgical costume" (bonnet and mouth covering) to prevent disease and illness during surgery (Strasser and Schlich Lancet 2020).
There was one recent cluster randomized controlled trial looking at surgical masks, cloth masks or a control group in healthcare workers (MacIntyre et al BMJ 2015). The main outcomes were clinical respiratory illness, influenza-like illness and laboratory-confirmed respiratory virus infection. All infectious outcomes were highest in the cloth mask group, lower in the control group and lowest in the medical mask group. As with all studies, this one had limitations. One of the main ones is that it looked at healthcare workers wearing masks for their own protection, not at the general public wearing masks as source control.
There has been a systematic review and meta-analysis on physical distancing, face masks and eye protection to prevent SARS-CoV-2 transmission (Chu et al Lancet 2020). With regard to masks, they found that face masks could result in a large reduction in risk of infection, with a stronger association for N95 or similar respirators compared with disposable surgical masks or similar cloth masks.
SRMAs also have limitations, and one of the main ones is that they are dependent on the quality of the included studies. This review in the Lancet included ten studies (n=2,647), with seven from China, eight looking at healthcare workers (not the general public) and only one looking at COVID19. All ten studies were observational designs, and the authors correctly claim only associations. They also say their certainty that masks are associated with a decrease in disease is "low" based on the GRADE category of evidence.
When considering an intervention, we cannot just consider the potential benefit, but we must also consider the potential harms. There is little or no evidence that wearing a face mask leads to potential harms. Yes, there are case reports of harm, children under 2 years of age should not wear face coverings (AAP News) and studies systematically under report adverse events (Hodkinson et al BMJ 2013) but the pre-test probability of individual harm is very low.
What many studies on masks conclude is that we need better evidence to inform our decisions. Now we have the first published randomized controlled trial on mask wearing in public to prevent transmission of COVID19.
Clinical Question: Does recommending surgical mask use outside the home reduce wearers' risk for SARS-CoV-2 infection in a setting where masks were uncommon and not among recommended public health measures?
Reference: Bundgaard et al. Effectiveness of Adding a Mask Recommendation to Other Public Health Measures to Prevent SARS-CoV-2 Infection in Danish Mask Wearers: A Randomized Controlled Trial. Annals of Internal Medicine 2020
Population: Danish adults over 18 years of age without symptoms of SARS-CoV-2 infection and without a previous positive SARS-CoV-2 test, working out of home with exposure to other people for more than three hours per day, and who did not normally wear a face mask at work
Exclusions: 18 years of age and younger, previous positive test for SARS-CoV-2, or wearing a face mask at work
Intervention: Participants were encouraged to follow the authorities’ general COVID-19 precautions and to wear a surgical face mask for a 30-day period when out of home (50 surgical masks were provided)
Comparison: Participants were encouraged to follow the authorities’ general COVID-19 precautions; no face masks were provided and no face mask recommendation was made
Outcome:
Primary Outcome: SARS-CoV-2 infection at one month by either antibody testing (IgG and/or IgM), polymerase chain reaction (PCR), or hospital diagnosis.
Secondary Outcome: PCR positivity for other respiratory viruses
Tertiary Outcomes: Returned swabs; psychological aspects of face mask wearing in the community; cost-effectiveness analyses on the use of surgical face masks; preference for self-conducted home swab vs. healthcare-conducted swab at hospital or similar; symptoms of COVID-19; self-assessed compliance with health authority guidelines on hygiene; willingness to wear face masks in the future; healthcare-diagnosed COVID-19 or SARS-CoV-2 (antibodies and/or PCR); mortality associated with COVID-19 and all-cause mortality; presence of bacteria, namely Mycoplasma pneumoniae, Haemophilus influenzae and Legionella pneumophila (to be obtained from registries when made available); frequency of infected household members between the two groups; frequency of sick leave between the two groups (to be obtained from registries when made available); and predictors of the primary outcome or its components
Authors’ Conclusions: “The recommendation to wear surgical masks to supplement other public health measures did not reduce the SARS-CoV-2 infection rate among wearers by more than 50% in a community with modest infection rates, some degree of social distancing, and uncommon general mask use. The data were compatible with lesser degrees of self-protection.”
Quality Checklist for Randomized Clinical Trials:
The study population included or focused on those in the emergency department. No
The patients were adequately randomized. Yes
The randomization process was concealed. Yes
The patients were analyzed in the groups to which they were randomized. Yes
The study patients were recruited consecutively (i.e. no selection bias). No
The patients in both groups were similar with respect to prognostic factors. Yes
All participants (patients, clinicians, outcome assessors) were unaware of group allocation. No
All groups were treated equally except for the intervention. Yes
Follow-up was complete (i.e. at least 80% for both groups). Yes
All patient-important outcomes were considered. Unsure
The treatment effect was large enough and precise enough to be clinically significant. Unsure
Key Results: The trial included 6,024 people with a mean age of 47 years; almost two-thirds identified as female.
No statistically significant difference in SARS-CoV-2 infection between the mask group and the no mask group
Primary Outcome: SARS-CoV-2 infection (Intention-to-Treat)
1.8% mask group vs. 2.1% no mask group
Absolute difference −0.3 percentage points (95% CI, −1.2 to 0.4); P = 0.38
Odds Ratio (OR) 0.82 (95% CI, 0.54 to 1.23); P = 0.33
Per-Protocol Analysis: 1.8% mask group vs. 2.1% no mask group, with an absolute difference of −0.4 percentage points (95% CI, −1.2 to 0.5); P = 0.40 and OR 0.84 (95% CI, 0.55 to 1.26); P = 0.40
Secondary Outcomes: Other viral infection
0.5% mask group vs. 0.6% no mask group
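As a sanity check on the numbers above, the reported point estimates can be approximated from the rounded event rates. A minimal Python sketch follows; it uses the rounded 1.8% and 2.1% rates rather than the trial's raw counts, so the computed odds ratio only approximates the published OR of 0.82:

```python
# Illustrative sketch: recomputing the Bundgaard trial's effect measures
# from the rounded event rates reported above (1.8% mask vs. 2.1% no mask).
# These are NOT the trial's raw counts, so results are approximate.

def risk_difference(p1: float, p2: float) -> float:
    """Absolute difference in event rates (intervention minus control)."""
    return p1 - p2

def odds_ratio(p1: float, p2: float) -> float:
    """Odds ratio computed from two event proportions."""
    return (p1 / (1 - p1)) / (p2 / (1 - p2))

p_mask, p_no_mask = 0.018, 0.021

rd = risk_difference(p_mask, p_no_mask)  # about -0.3 percentage points
or_ = odds_ratio(p_mask, p_no_mask)      # about 0.85 from rounded rates

print(f"Risk difference: {rd * 100:.1f} percentage points")
print(f"Odds ratio: {or_:.2f}")
```

Note that the odds ratio computed from the rounded percentages (about 0.85) differs slightly from the published 0.82, which was calculated from the actual event counts.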
There are a number of nerdy points we could have discussed but in typical fashion and to keep the blog/podcast to a digestible length we have highlighted five.
1) Methods: Some questions have been raised about the methodology. This trial was registered with ClinicalTrials.gov (NCT04337541). The trial protocol was registered with the Danish Data Protection Agency (P-2020-311), adhered to the recommendations for trials described in the SPIRIT Checklist and they published their methodology in the Danish Medical Journal (Bundgaard et al 2020).
Some of the comments about the methodology specifically referenced the lack of ethics approval. However, the researchers presented the protocol to the independent regional scientific ethics committee of the Capital Region of Denmark, which did not require ethics approval in accordance with Danish legislation. The trial was also done in accordance with the principles of the Declaration of Helsinki.
In the supplemental material there is a letter from the Chairman of the Ethics Committee saying they do not require ethics approval. It is hard to be critical of the researchers who took reasonable steps to address ethical concerns and were told they did not need ethics approval.

Nov 21, 2020 • 32min
SGEM#308: Taking Care of Patients Everyday with Physician Assistants and Nurse Practitioners
Date: November 19th, 2020
Guest Skeptic: Dr. Corey Heitz is an emergency physician in Roanoke, Virginia. He is also the CME editor for Academic Emergency Medicine.
Reference: Pines et al. The impact of advanced practice provider staffing on emergency department care: productivity, flow, safety, and experience. AEM November 2020.
Case: You are the medical director of a medium-sized urban emergency department (ED). Volumes have increased over the past few years and you’re considering adding an extra shift or two. Your hospital has asked you to consider adding some advanced practice providers (APPs) instead of physician hours.
Background: Advanced practice providers (APPs) such as nurse practitioners (NPs) and Physician Assistants (PAs) are increasingly used to cover staffing needs in US emergency departments. This is in part driven by economics, as APPs are paid less per hour than physicians.
The calculation works if APP productivities are similar enough to physicians to offset differentials in billing rates. However, little data exists comparing productivity, safety, flow, or patient experiences in emergency medicine.
The American Academy of Emergency Medicine (AAEM) has a position statement on what they refer to as non-physician practitioners that was recently updated. The American College of Emergency Physicians (ACEP) has a number of documents discussing APPs in the ED.
There has been a concern about post-graduate training of NPs and PAs in the ED. A joint statement on the issue was published in September this year by AAEM/RSA, ACEP, ACOEP/RSO, CORD, EMRA, and SAEM/RAMS.
Clinical Question: How does the productivity of advanced practice providers compare to emergency physicians and what is its impact on emergency department operations?
Reference: Pines et al. The impact of advanced practice provider staffing on emergency department care: productivity, flow, safety, and experience. AEM November 2020.
Population: National emergency medicine group in the USA that included 94 EDs in 19 states
Exposure: Proportion of total clinician hours staffed by APPs in a 24-hour period at a given ED
Comparison: Emergency physician staffing
Outcome:
Primary Outcome: Productivity measures (patients per hour, RVUs/hour, RVUs/visit, RVUs per relative salary for an hour)
Safety Outcomes: Proportion of 72-hour returns and proportion of 72-hour returns resulting in admission
Other Outcomes: ED flow by length of stay (LOS), left without completion of treatment (LWOT)
Dr. Jesse Pines
This is an SGEMHOP episode which means we have the lead author on the show. Dr. Jesse Pines is the National Director for Clinical Innovation at US Acute Care Solutions and a Professor of Emergency Medicine at Drexel University. In this role, he focuses on developing and implementing new care models including telemedicine, alternative payment models, and also leads the USACS opioid programs.
Authors’ Conclusions: “In this group, APPs treated less complex visits and half as many patients/hour compared to physicians. Higher APP coverage allowed physicians to treat higher-acuity cases. We found no economies of scale for APP coverage, suggesting that increasing APP staffing may not lower staffing costs. However, there were also no adverse observed effects of APP coverage on ED flow, clinical safety, or patient experience, suggesting little risk of increased APP coverage on clinical care delivery.”
Quality Checklist for Observational Study:
Did the study address a clearly focused issue? Yes
Did the authors use an appropriate method to answer their question? Unsure
Was the cohort recruited in an acceptable way? Yes
Was the exposure accurately measured to minimize bias? Yes
Was the outcome accurately measured to minimize bias? Yes
Have the authors identified all important confounding factors? Unsure
Was the follow up of subjects complete enough? Yes
How precise are the results? Fairly precise
Do you believe the results? Yes
Can the results be applied to the local population? Unsure
Do the results of this study fit with other available evidence? Unsure
Key Results: Over five years there were more than 13 million ED visits at these 94 sites. The majority (75%) of visits were treated by physicians independently. PAs treated 18.6%, NPs 5.4% and 1.4% were treated by both a physician and an APP.
Physicians were more productive than physician assistants and nurse practitioners.
Effect of 10% increase in APP coverage:
Patients/hour: -0.12 (95% CI; -0.15 to -0.10)
RVUs/hour: -0.4 (95% CI; -0.5 to -0.3)
Safety and Outcome: No significant effect on length of stay, left without treatment, and 72-hour returns
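To put the productivity coefficient in concrete terms, here is a back-of-envelope sketch; the ED size and coverage change used below are assumed for illustration and are not taken from the paper:

```python
# Back-of-envelope sketch (assumed numbers, not from the paper):
# the study estimates that each 10-percentage-point increase in APP
# coverage lowers throughput by 0.12 patients per clinician-hour.

PATIENTS_PER_HOUR_DROP_PER_10PCT = 0.12  # point estimate from the results above

def daily_throughput_change(clinician_hours_per_day: float,
                            app_coverage_increase_pct: float) -> float:
    """Estimated change in patients seen per day (negative = fewer patients)."""
    steps_of_10pct = app_coverage_increase_pct / 10.0
    return -PATIENTS_PER_HOUR_DROP_PER_10PCT * steps_of_10pct * clinician_hours_per_day

# Hypothetical ED: 60 clinician-hours/day, APP coverage raised by 20 percentage points
print(round(daily_throughput_change(60, 20), 1))  # -> -14.4 patients/day
```

Under these assumed numbers, a 20-percentage-point shift toward APP coverage would mean roughly 14 fewer patients seen per day, which is why the authors found no economies of scale despite the lower hourly cost of APPs.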
Listen to the podcast on iTunes to hear Jesse’s responses to our five nerdy questions.
1) Surprise: These results surprise me somewhat due to personal experience where APPs see lower acuity patients, often in a “fast-track” area. I don’t know our facility data, but would be surprised if the APPs had significantly lower overall patients/hour than the doctors.
2) Physician Satisfaction: You looked at the productivity and safety as an outcome. What about physician satisfaction? I know some doctors who can’t function well without an APP and other doctors who prefer working without an APP.
3) Not All Equal: You mention that when making the schedules, one physician hour was equal to two APP hours. For your analysis, it was unclear to me if you calculated your numbers using 1:1 physician to APP hours, or if you kept the 1:2 ratio.
4) Patient Satisfaction: You had an exploratory outcome using a Press-Ganey (PG) percentile rank as a measure of patient experience. Those outside of the USA may not be familiar with the Press-Ganey patient satisfaction survey. Can you explain this metric and what did you find in your study about patient satisfaction?
5) External Validity: This was a large study with 19 states, 94 sites and 13 million ED visits. However, it represents one large national ED group. Do you think the results would apply to small groups, democratic physician-led groups, or rural sites?
Comment on Authors’ Conclusion Compared to SGEM Conclusion: We agree with the authors’ conclusions
SGEM Bottom Line: Increasing advanced practice provider coverage has minimal effect on emergency department productivity, flow and safety outcomes.
Case Resolution: You continue the discussion with hospital administration, understanding that APP hours need to be added in a way that best utilizes their skillsets, not as a one-for-one replacement for physician hours. You suggest that more than one APP hour would be needed to replace each physician hour.
Dr. Corey Heitz
Clinical Application: APPs can be utilized to “offload” lower acuity cases, while allowing physicians to care for higher acuity patients. Physicians overall had higher levels of productivity, both as measured by patients/hour and RVUs/hour.
What Do I Tell My Patient? Not applicable
Keener Kontest: Last week's winner was Dr. Daniel Walter. He is an Emergency Medicine & Critical Care registrar working in the UK. Dan knew the LAST thing you want to see happen after injecting someone with lidocaine is Local Anesthetic Systemic Toxicity.
Listen to the SGEM podcast to hear this week's question. Send your answer to TheSGEM@gmail.com with “keener” in the subject line. The first correct answer will receive a cool skeptical prize.
SGEMHOP: Now it is your turn SGEMers, what do you think of this episode on APPs in the ED? Tweet your comments using #SGEMHOP. What questions do you have for Jesse and his team? Ask them on the SGEM blog. The best social media feedback will be published in AEM.
Also, don’t forget those of you who are subscribers to Academic Emergency Medicine can head over to the AEM home page to get CME credit for this podcast and article. We will put the process on the SGEM blog:
Go to the Wiley Health Learning website
Register and create a log in
Search for Academic Emergency Medicine – “November”
Complete the five questions and submit your answers
Please email Corey (coreyheitzmd@gmail.com) with any questions or difficulties.
Remember to be skeptical of anything you learn, even if you heard it on the Skeptics’ Guide to Emergency Medicine.

Oct 31, 2020 • 20min
SGEM#307: Buff up the lido for the local anesthetic
Date: October 29th, 2020
Guest Skeptic: Martha Roberts is a triple-certified critical and emergency care nurse practitioner currently living and working in Sacramento, California. She is the host of EM Bootcamp in Las Vegas, as well as a regular speaker and faculty member for The Center for Continuing Medical Education (CCME). She writes a blog called The Procedural Pause for Emergency Medicine News and is the lead content editor and director for the video series soon to be included in Roberts & Hedges' Clinical Procedures in Emergency Medicine.
Reference: Vent et al. Buffered lidocaine 1%, epinephrine 1:100,000 with sodium bicarbonate (hydrogencarbonate) in a 3:1 ratio is less painful than a 9:1 ratio: A double-blind, randomized, placebo-controlled, crossover trial. JAAD (2020)
Case: A 35-year-old female arrives at the emergency department with a 3 cm laceration to the palmar surface of her left forearm sustained from a clean kitchen knife while emptying the dishwasher. The patient reports a fear of needles and has concerns about locally anaesthetizing the area because, “I got stitches on my arm once before and that shot burned like crazy”! The patient asks the practitioner if there is any chance she can get a shot that “burns less” than her last one.
Background: We have covered wound care a number of times on the SGEM. This has included some myth busting way back in SGEM#9 called Who Let the Dogs Out.
That episode busted five myths about simple wound care in the Emergency Department:
Patients' Priorities: Infection is not usually the #1 priority for patients. For non-facial wounds it is function, and for facial wounds it is cosmetic. This is in contrast to the clinicians' #1 priority, which is usually infection.
Dilution Solution: You do not need some fancy solution (sterile water, normal saline, etc) to clean a wound. Tap water is usually fine.
Sterile Gloves: You do not need sterile gloves for simple wound treatment. Non-sterile gloves are fine. Save the sterile gloves for sterile procedures (ex. lumbar punctures).
Epinephrine in Local Anesthetics: This will not make the tip of things fall off (nose, fingers, toes, etc). Epinephrine containing local anesthetics can be used without the fear of an appendage falling off.
All Simple Lacerations Need Sutures: Simple hand lacerations less than 2cm don’t need sutures. Glue can be used in many other areas including criss-crossing hair for scalp lacerations.
Other SGEM episodes on wound care include:
SGEM#63: Goldfinger (More Dogma of Wound Care)
This episode looked at how long do you have to close a wound. The bottom line was that there is no good evidence to show that there is an association between infection and time from injury to repair.
SGEM#156: Working at the Abscess Wash
The question from that episode was: does irrigation of a cutaneous abscess after incision and drainage reduce the need for further intervention? Answer: Irrigation of a cutaneous abscess after an initial incision and drainage is probably not necessary.
SGEM#164: Cuts Like a Knife – But you Might Also Need Antibiotics for Uncomplicated Skin Abscesses.
SGEM Bottom Line: The addition of TMP/SMX to the treatment of uncomplicated cutaneous abscesses represents an opportunity for shared decision-making.
The issue of buffering lidocaine was covered on SGEM #13. This episode briefly reviewed a Cochrane SRMA that looked at buffering 9ml of 1% or 2% lidocaine with 1ml of 8.4% sodium bicarbonate (Cepeda et al 2010).
The SRMA of buffering lidocaine contained 23 studies with 8 of the 23 studies having moderate to high risk of bias. The SGEM bottom line was that patients might appreciate the extra effort of buffering the lidocaine.
Interestingly, this Cochrane Review was withdrawn from publication in 2015. The reason provided was that the review was no longer compliant with the Cochrane Commercial Sponsorship Policy. The non-conflicted authors have decided not to update the review.
Clinical Question: Does buffering lidocaine with sodium bicarbonate make local anesthetic less painful?
Reference: Vent et al. Buffered lidocaine 1%, epinephrine 1:100,000 with sodium bicarbonate (hydrogencarbonate) in a 3:1 ratio is less painful than a 9:1 ratio: A double-blind, randomized, placebo-controlled, crossover trial. JAAD (2020)
Population: Healthy volunteers age 18-75 years of age
Exclusions: Hypersensitivity or allergies to local anesthetics of the amide type or to auxiliary substances such as sulfites, pregnant, damaged skin on the arms, or inability to give informed consent.
Intervention: IMPs (investigational medicinal products) were injected 5 cm distal to the cubital fossa
IMP1: 1% lidocaine with epinephrine plus sodium bicarbonate in a 3:1 mixing ratio
IMP2: 1% lidocaine with epinephrine plus sodium bicarbonate in a 9:1 mixing ratio
IMP3: 1% lidocaine with epinephrine
Comparison: Placebo of 0.9% sodium chloride (IMP4)
Outcomes:
Primary Outcome: Pain during infiltration on a numerical rating scale (0-10, with 0=no pain and 10=unacceptable pain)
Secondary Outcomes: Patient comfort during infiltration (four categorical terms: desirable, acceptable, less acceptable or unacceptable) and duration of local anesthesia (30-minute intervals up to 3 hours) using a standardized laser stimulus (numbness: yes or no)
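For readers wondering what these mixing ratios mean at the bedside, they translate directly into syringe volumes. A minimal sketch follows; the 10 mL syringe size is an assumption for illustration, not from the paper:

```python
# Sketch: volumes of 1% lidocaine-with-epinephrine and 8.4% sodium
# bicarbonate to draw up for a given total syringe volume, at each
# lidocaine:bicarbonate mixing ratio studied (3:1 and 9:1 by volume).
# The 10 mL total is illustrative, not a recommendation from the paper.

def buffered_mix(total_ml: float, lido_parts: int, bicarb_parts: int = 1):
    """Return (lidocaine mL, bicarbonate mL) for a lido:bicarb volume ratio."""
    parts = lido_parts + bicarb_parts
    return (total_ml * lido_parts / parts, total_ml * bicarb_parts / parts)

for ratio in (3, 9):
    lido, bicarb = buffered_mix(10.0, ratio)
    print(f"{ratio}:1 mix in a 10 mL syringe -> "
          f"{lido:.1f} mL lidocaine + {bicarb:.1f} mL bicarbonate")
```

So a 3:1 mix in a 10 mL syringe is 7.5 mL of lidocaine plus 2.5 mL of bicarbonate, versus 9 mL plus 1 mL for the 9:1 mix studied in the earlier Cochrane Review.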
Authors’ Conclusions: “Lido/Epi-NaHCO3 mixtures effectively reduce burning pain during infiltration. The 3:1 mixing ratio is significantly less painful than the 9:1 ratio. Reported findings are of high practical relevance given the extensive use of local anesthesia today.”
Quality Checklist for Randomized Clinical Trials:
The study population included or focused on those in the emergency department. No
The patients were adequately randomized. Yes
The randomization process was concealed. Yes
The patients were analyzed in the groups to which they were randomized. Yes
The study patients were recruited consecutively (i.e. no selection bias). Unsure
The patients in both groups were similar with respect to prognostic factors. Unsure
All participants (patients, clinicians, outcome assessors) were unaware of group allocation. Unsure
All groups were treated equally except for the intervention. Yes
Follow-up was complete (i.e. at least 80% for both groups). Yes
All patient-important outcomes were considered. Yes
The treatment effect was large enough and precise enough to be clinically significant. Unsure
Key Results: They enrolled 48 healthy volunteers, 21 males and 27 females aged 21-62 with a mean age of 31 years.
Buffering lidocaine made injections less painful
Primary Outcome: Pain during infiltration
IMP1 (3:1 mixture) was less painful than IMP2 (9:1 mixture)
IMP3 (unbuffered) was more painful than IMP1 or IMP2
IMP4 (placebo) was more painful than IMP1-3
Secondary Outcomes:
Patient Comfort During Infiltration: IMP1 (3:1 mixture) had the least reported discomfort and IMP4 (placebo group) had the most reported discomfort.
Duration of Local Anesthetic: Laser-induced pain was absent in the injection areas for IMP1-3 (intervention groups) between 5 minutes and 3 hours after infiltration but not for IMP4 (placebo)
1) External Validity: These healthy volunteers with a mean age of 31 years may not represent the patients we see for simple wound repairs in the emergency department. We do not know any details about the volunteers except their age and self-identified gender. The study was also conducted in Germany. Cultural and social factors can play a role in the perception of acute and chronic pain (Peacock and Patel 2018, MM Free 2002 and MM Free 2012).
2) Blinding: Local anesthetic hurts. If the patients were aware of the hypothesis (buffering lidocaine to minimize pain), this could have biased the subjective self-reporting for the primary outcome to have a larger effect size.
3) Sample Size: This was a relatively small study with only 48 volunteers. Are the results large enough (3 points on an NRS value) and precise enough (no 95% CI were provided for the point estimates) to be clinically relevant?
4) Shelf-Life: We stock large bottles of sodium bicarbonate and would usually only require a small amount to buffer the lidocaine needed to treat a single patient. This could lead to a great deal of waste. Sodium bicarbonate is not expensive, but multiplying a small cost by a big number (the number of simple wound repairs done per day) can add up to a large one. One way around that would be to mix up a larger amount at the start of a shift. However, the stability of the buffered lidocaine-sodium bicarbonate solution is limited. It would be great if a stable commercial product was available in the <10ml solutions we typically require.
5) Alternatives: There are other methods that can be used to minimize the pain of local anesthetic injection. These include, but are not limited to, L.E.T. (lidocaine, epinephrine and tetracaine) used topically.
Comment on Authors’ Conclusion Compared to SGEM Conclusion: We agree that buffering lidocaine with sodium bicarbonate decreases pain during infiltration and that a 3:1 mixture is better than a 9:1 mixture. We are not as sure of the “high” practical relevance due to the issues mentioned in nerdy point #4.
SGEM Bottom Line: Consider buffering your lidocaine with a 3:1 sodium bicarbonate mixture to decrease the discomfort of local anesthetic infiltration.
Case Resolution: You inform her that there is a way to make the local injection burn less. You mix up your 1% lidocaine in a 3:1 mixture with sodium bicarbonate. She leaves very happy with post-suture instructions.
Clinical Application: If the patient expresses fears about the anesthetic injection, consider buffering your lidocaine with sodium bicarbonate in a 3:1 ratio to reduce the pain of infiltration.


