
Sleep-protective monitoring to reduce ICU delirium


Introduction

Recently an excellent post on the Trauma Professional's Blog pointed out that nocturnal vital signs disrupt sleep and may be unnecessary in stable patients (e.g. patients recovering from minor orthopedic surgery).  I couldn't agree more.  Allowing restorative sleep is one of the best approaches to prevention of delirium.

What about patients in the ICU?  Critically ill patients certainly require monitoring, but are also at increased risk of delirium.  How can we monitor patients safely without (literally) driving them crazy?

Sleep-protective vs. sleep-disruptive vital signs

In the ICU, we have the luxury of having patients attached to a variety of continuous monitors which can unobtrusively obtain information.  Provided that the alarms are set appropriately, this allows for nondisruptive patient monitoring.  For example, pulse oximetry and respiratory rate can easily be obtained in a sleeping patient, providing useful information about oxygenation and respiratory efforts.


The only two vital sign measurements which often interfere with sleep are temperature and blood pressure.  Avoiding temperature measurement when the patient is asleep is probably fine for most ICU patients (with the exception of patients with neurologic injury, in whom fever may be more problematic).  What about blood pressure?

Nondisruptive hemodynamic monitoring

Blood pressure is certainly an important vital sign.  However, it's not the only approach to hemodynamic monitoring.  In particular, the presence of good urine output is reassuring evidence of adequate end-organ perfusion. 


Above is one possible approach to sleep-protective hemodynamic monitoring.  This may be considered in patients who are not at high risk for development of shock and don't have active cardiac problems (e.g., a patient admitted for COPD exacerbation).  If efforts are made to obtain blood pressure and temperature measurements when the patient is awakened for other reasons (e.g. phlebotomy, repositioning), then this would probably result in a fair amount of blood pressure and temperature monitoring as well.  

Patients in whom nocturnal stimulation is especially problematic

The risk of occult hemodynamic deterioration must be weighed against the risk of stimulating patients with vital sign monitoring.  For example, patients who have already developed delirium are at greater risk of persistent or worsening delirium due to sleep deprivation.  Patients with asthma or COPD exacerbation and a significant component of anxiety should be allowed uninterrupted sleep if at all possible, because arousal and anxiety may fuel their dyspnea in a vicious cycle (described previously here).

Greater focus on continuous monitoring may be useful


Current technologies allow for continuous monitoring of heart rate and respiratory rate using a single set of three EKG leads.  Close attention to trends in continuously acquired information may detect instability earlier than intermittent vital sign monitoring.  In particular, worsening tachypnea and tachycardia often precede overt clinical deterioration, so focusing on trends in these parameters may be especially useful (Cretikos 2008).


  • Providing adequate sleep and maintaining normal circadian cycles are important to prevent and manage delirium in the ICU.
  • The hemodynamic and respiratory status of ICU patients can often be assessed without interrupting sleep using respiratory rate, pulse oximetry, heart rate, urine output (if catheterized), and ventilator parameters (if intubated).
  • In patients at low risk of hemodynamic decompensation, blood pressure monitoring may be suspended during sleep if there are other signs available for hemodynamic monitoring (e.g. heart rate and urine output).   The ideal monitoring strategy may be determined on a patient-by-patient basis, weighing the risk of hemodynamic deterioration vs. the harm of sleep deprivation.   



Image Credits: Monitor image from http://www.mc.vanderbilt.edu/documents/7north/files/MP5%20Rev_%20G%20Training%20Guide.pdf

Demystifying the p-value

Introduction

The limitations of p-values for null hypothesis testing have been debated since their invention in the 1920s.  Unfortunately, statistics textbooks typically whitewash this controversy, presenting null hypothesis testing as the only viable approach to statistics.  Recently, the journal Basic and Applied Social Psychology took this debate a step further, officially banning the use of p-values in any manuscript.  This is an eye-opening move, which invites serious re-evaluation of the p-value. 

This post starts by exploring five major problems with the p-value.  It will then discuss six ways that we can try to interpret p-values in a meaningful way.

Five problems with the p-value

[Problem #1]  P-values attempt to exclude the null hypothesis without actually showing that the alternative is much better.

The p-value attempts to prove an experimental hypothesis by disproving the alternative (the "null hypothesis") as shown below.  The experimental hypothesis is thus proven by a process of exclusion:


Unfortunately, this is fundamentally flawed.  Statistics cannot absolutely exclude the null hypothesis (p=0), but merely show that the observed data would be unlikely to occur (p<0.05) if the null hypothesis were true (2).  It is subsequently assumed that the observed data would be much more likely if the experimental hypothesis were true.


However, there is no guarantee that the experimental hypothesis fits the data much better than the null hypothesis.  Maybe the data is just really wacky.  Perhaps the data doesn't fit any hypothesis very well.  By comparing the data to only one of these possibilities (the null hypothesis), the standard approach to null-hypothesis testing evaluates only one side of the balance:


[Problem #2]  The P-value ignores pre-test probability

Let's imagine that a homebound elderly woman is admitted in Vermont USA for constipation.  By accident, a serologic test for Ebola is ordered and it comes back as positive.  The test has a specificity of 99.9%.  However, the patient has no signs of Ebola nor any possible contact with Ebola.  Any sensible clinician would realize that this is a false-positive test.  However, technically this is a highly "statistically significant" result.  Assuming the null hypothesis (that the woman doesn't have Ebola), this result would be expected 0.1% of the time (p=0.001).  Based on the p-value, the woman must have Ebola!

This scenario highlights how the p-value ignores pre-test probability.  If the hypothesis is highly unlikely to begin with, even a strongly positive statistical test may not render the hypothesis probable.  Alternatively, if a hypothesis is very likely to begin with, then even a weakly positive statistical test may render it probable.  Any statistical test is meaningless without considering the pre-test probability (Browner 1987). 
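For readers who want to see the arithmetic, below is a minimal sketch of the Bayesian calculation for this scenario.  Only the 99.9% specificity comes from the example above; the pre-test probability and sensitivity are hypothetical values chosen purely for illustration.

```python
# Post-test probability of Ebola given a positive serology (Bayes' theorem).
# The 99.9% specificity is from the scenario above; the pre-test probability
# and sensitivity are hypothetical values chosen purely for illustration.

pretest_prob = 1e-6   # hypothetical: homebound Vermonter with no exposures
sensitivity = 0.99    # hypothetical
specificity = 0.999   # from the scenario (false-positive rate of 0.1%)

true_pos = sensitivity * pretest_prob
false_pos = (1 - specificity) * (1 - pretest_prob)
posttest_prob = true_pos / (true_pos + false_pos)

print(f"Post-test probability of Ebola: {posttest_prob:.4%}")  # ~0.1%
```

Despite a "highly significant" p=0.001, the post-test probability of Ebola remains around 0.1%, because the pre-test probability was vanishingly small.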

[Problem #3]  P-values actually tell us the reverse of what we want to know

The p-value tells us the likelihood of observing the data, assuming that the null hypothesis is correct.  This is actually the reverse of what we want to know: What is the likelihood of the null hypothesis given the observed data? 

For example, in the above situation, the p-value tells us the likelihood that an Ebola serology will be positive, assuming that the patient doesn't have Ebola (p=0.001).  This evades the question that we are truly interested in: What is the likelihood that the patient has Ebola, given that she has a positive Ebola serology? 

Although these reversed conditional probabilities may sound deceptively similar (the probability of A given B versus the probability of B given A), they are entirely different.  For example, to get from one conditional probability to the other, Bayes Theorem is required (neon sign below).  Failing to recognize this difference leads to the widely held misconception that the p-value is equal to the probability that the null hypothesis is true.   


[Problem #4]  P-values are not reproducible

One of the bedrock principles of science is that any meaningful result must be reproducible.  Anything which is not reproducible is not scientific. 

We've all probably experienced the phenomenon where adding or subtracting a few data points will move the p-value across the p=0.05 goal-post.  What is even more disquieting is that when the entire experiment is repeated, the p-value varies much more (Halsey 2015).  As the confidence interval slides around with repetition of the experiment, p-values rise and fall exponentially based on how close to zero the confidence interval lands (illustrated in the video below).  The p-value seems less like a sober and reproducible feature of science, and more like a random gamble.
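A quick simulation makes this fickleness concrete.  The sketch below (with an arbitrarily assumed effect size and sample size) repeats an identical experiment twenty times and shows how widely the p-value bounces around:

```python
# Repeat an identical two-group experiment and watch the p-value jump around.
# The effect size and sample size are arbitrary choices for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, true_effect = 30, 0.5        # 30 subjects per group, true difference of 0.5 SD

pvals = []
for _ in range(20):             # repeat the same experiment 20 times
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(true_effect, 1.0, n)
    pvals.append(stats.ttest_ind(treated, control).pvalue)

print([round(p, 3) for p in sorted(pvals)])
# Typical output spans p<0.01 to p>0.5: the same experiment, wildly different p-values.
```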


[Problem #5]  The P-value is generally used in a dogmatic and arbitrary fashion

The use of the p-value has grown into something arbitrary and nonsensical.  If p=0.051, then the result is "insignificant", a mere "trend" in the data which is easily dismissed.  However, if one data point is added causing a drop to p=0.049 then the result is suddenly, magically significant.  It was not meant to be this way.  When the p-value was designed in the 1920s, it was intended as a flexible tool to determine whether an experiment was worth repeating and investigating further.  It was never conceived to represent absolute truth. 

Six ways to avoid being misled by P-values

It's easier to be critical than to be productive.  Critiquing the p-value is the easy part.  The five problems listed above are not close to being exhaustive (for example, one article listed a dozen problems with the p-value; Goodman 2008). 

The real challenge is determining how to move forward given this knowledge.  Bayesian statistics are emerging as a viable alternative to the p-value (more on this below), but for now p-values are everywhere.  What approaches can we use to interpret p-values without being misled? 

[Solution #1]  Re-scale your interpretation of the p-value

The p-value evaluates the null hypothesis in a vacuum.  Perhaps the null hypothesis doesn't fit the data well, but how much better does the experimental hypothesis fit the data?  This question is answered using Bayesian statistical methods.  The key to this analysis is the Bayes Factor, which equals the ratio of these two probabilities (figure below).  The Bayes Factor also equals the likelihood ratio relating the pre-test and post-test odds of the experimental hypothesis being true.  Neat.


Johnson 2013 evaluated a variety of standard statistical tests, correlating the p-value with the Bayes Factor: 


Therefore if p=0.05, the odds of the experimental hypothesis being valid increase by a factor of roughly 3-5 (e.g., if the pre-test probability were 50%, the post-test probability would increase to 75%-83%)(3).  Thus p=0.05 reflects a moderate strength of evidence, not definitive proof as is commonly believed.  Other investigators have obtained similar results using different Bayesian techniques (Goodman 2001). 

These correlations are rough approximations.  Ideally, the Bayes Factor would be calculated directly from the data in each study (Jakobsen 2014).  However, in the absence of such calculations, these correlations may help understand the meaning of various p-values.   
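As a sketch of how these correlations can be applied: treating the Bayes Factor as a likelihood ratio on the odds scale reproduces the 75%-83% figure quoted above (the 50% pre-test probability is simply the example from the text):

```python
# Convert a pre-test probability into a post-test probability, using a Bayes
# Factor as a likelihood ratio on the odds scale.  The BF range of 3-5 is the
# rough correlate of p=0.05 from Johnson 2013, as discussed above.

def posttest_probability(pretest_prob: float, bayes_factor: float) -> float:
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * bayes_factor
    return posttest_odds / (1 + posttest_odds)

for bf in (3, 5):
    print(f"BF={bf}: 50% pre-test -> {posttest_probability(0.50, bf):.0%} post-test")
# BF=3: 50% pre-test -> 75% post-test
# BF=5: 50% pre-test -> 83% post-test
```

The same function also illustrates Solution #2 below: with a more skeptical 10% pre-test probability, a Bayes Factor of 3-5 only raises the post-test probability to about 25%-36%.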

[Solution #2]  Consider the pre-test probability.


The post-experiment odds that the experimental hypothesis is true may be calculated using the Bayes Factor as a likelihood ratio as shown above (3).  As in clinical testing, a statistical test alone is meaningless without taking into account the pre-test probability.  This equation allows for a seamless combination of pre-test probability with the experimental data.  Notably, the final result is equally dependent on both of these factors.

Unfortunately, the pre-test probability is often unclear.  The appropriate pre-test probability for clinical trials has been debated previously with no clear answer.  The principle of indifference suggests that in a state of ignorance, the hypothesis should be assigned a 50% pre-test probability.  However, in the history of medicine, most therapies which were investigated have proven to be ineffective.  Therefore, utilizing a pre-test probability of 50% may be too generous in most cases.  Ideally, the pre-test probability would take into account the prior evidence supporting the hypothesis (i.e. basic science, animal data, prior clinical studies) and the success rate of similar hypotheses. 

Estimating pre-test probability might seem to add an element of subjectivity which threatens the "objective" results of statistical testing.  However, failing to consider pre-test probability is even more dangerous, because this implicitly confers a 50% pre-test probability on every hypothesis (1).  One advantage of a Bayesian approach is that by providing the Bayes Factor, it allows the reader to calculate the post-test probability based on their own pre-test probability and draw their own conclusions.

Ultimately this provides us with a disappointing realization:  It is generally impossible to determine the probability that the experimental hypothesis is correct.  This probability depends on the pre-test probability, which is usually unknown.  Thus, the final probability of the experimental hypothesis being valid is a known unknown.  Statistical tests help point us in the right direction, but they cannot definitively reveal the truth.


[Solution #3]  Always bear in mind that the p-value does not equal α (type-I error)

Type-I error (α) is the risk of incorrectly discarding the null hypothesis, and thereby incorrectly accepting the experimental hypothesis.  One very common misconception is that the p-value equates with α (i.e., if p<0.05 then α<0.05).  This misconception comes from conflating reversed conditional probabilities (discussed above in Problem #3).  In practice, p is often lower than α.  For example, some authors suggest that hypotheses which are "significant" near the p=0.05 level have a >20% likelihood of being wrong (α>0.2; Goodman 2001, Johnson 2013).
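The arithmetic behind this claim can be sketched as follows.  The 10% pre-test probability and 50% power are assumptions chosen for illustration; only the p<0.05 cutoff is conventional:

```python
# Toy arithmetic behind "significant results near p=0.05 are often wrong".
# The pre-test probability and power are assumptions chosen for illustration.

pretest_prob = 0.10   # assumed fraction of tested hypotheses that are true
power = 0.50          # assumed probability of achieving p<0.05 for a real effect
alpha_cutoff = 0.05   # conventional significance threshold

true_positives = pretest_prob * power
false_positives = (1 - pretest_prob) * alpha_cutoff
fraction_wrong = false_positives / (true_positives + false_positives)

print(f"Fraction of 'significant' results that are wrong: {fraction_wrong:.0%}")  # ~47%
```

Under these assumptions, nearly half of "significant" results are false positives, even though every individual test honored the p<0.05 threshold.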

[Solution #4]  Consider modifying the acceptable Type-I error (α) based on clinical context

Conventionally, the acceptable level of type-I error (α) is set to the magical value of α<5%.  However, this doesn't always make clinical sense.  Consider two imaginary hypotheses:

Hypothesis #1: New treatment for septic shock using early goal-directed intra-cranial pressure monitoring reduces mortality (α=0.04)

Hypothesis #2: Vitamin C supplementation improves healing of pressure ulcers (α=0.1)

Placing intra-cranial pressure monitors is invasive.  Therefore, although Hypothesis #1 does indeed have α<0.05, I would be unwilling to broadly implement this before replicating it with another study.  Alternatively, vitamin C supplementation is very safe, so I would be willing to prescribe this therapy despite a lower level of certainty (α=0.1).


Ultimately as clinicians, we must weigh the relative likelihood of harm vs. benefit as well as the relative amount of harm vs. benefit.  The statistical tests described in this post pertain primarily to the likelihood that the therapy is beneficial (Type-I error, α).  However, when weighing a clinical decision this is only one of four important pieces of information (figure above).  Depending on the clinical context, different levels of Type-I error may be clinically acceptable.

[Solution #5]  Evaluate the P-value in the context of other statistical information


When examining a study, the entirety of the data should be considered rather than focusing only on the p-value.  In particular, effect size, confidence intervals, sample size, and power may be important.  For example, consider the two results shown below regarding benefit from an experimental therapy.  Although both have the same p-value, their interpretation is quite different.  The results on the right may suggest that the therapy is ineffective, whereas the results on the left may suggest that the study was underpowered and additional evidence is needed to clarify the true effect. 


[Solution #6]  Don't expect statistics to be a truth machine

We live in a fast-paced society, deluged by information.  We want quick answers.  Is the study positive or negative?  Is the drug good or bad?  Quick, what is the bottom line?  The arbitrary dividing line of p=0.05 is a quick, but extremely dirty approach to this.  This concept of a single cutoff value yielding a binary result (significant vs. insignificant) is based on the misconception that statistical tests are some sort of "truth machine" which must yield a definitive result.


In reality, statistical tests never tell us with 100% certainty whether a hypothesis is true.  As discussed above, statistical tests cannot even tell us the absolute probability of the hypothesis being true.  Statistical tests can only provide us with likelihood ratios which may increase or decrease our belief in the hypothesis.  It is our job to interpret these likelihood ratios, which often requires a lot of work.  Rarely, statistical tests may yield dramatic results, but more often they result in shades of grey.  We must be willing to accept these shades of grey, and work with them.  We must have the patience to perform more experiments and invest more thought before reaching a conclusion. 

Conclusions

P-values are deeply entrenched in the medical literature.  Based initially on a suggestion by Fisher in the 1920s, hypotheses with p<0.05 are accepted whereas hypotheses with p>0.05 are rejected.  It is widely believed that the p-value measures the likelihood of the null hypothesis and the reproducibility of the experiment.  Unfortunately, neither of these beliefs is true. 

The harsh reality is that our statistical tests aren't nearly as definitive as is commonly thought.  A p-value of 0.05 may actually correlate with a likelihood ratio of 3-5 that the hypothesis is correct, which constitutes only moderately strong evidence.  P-values are notoriously variable, providing no information about the reproducibility of the result.  Furthermore, the final probability that the hypothesis is correct is strongly dependent on the pre-test probability, which is often ignored.

Change is difficult, particularly regarding something as pervasive as the p-value.  Demanding more statistically rigorous results may be impossible for investigators, particularly in critical care studies where recruitment is difficult.  Ultimately we may have to accept that studies aren't the statistically all-powerful truth machines which we have believed them to be.  In the face of weaker statistical evidence, we may need to proceed more cautiously and with greater emphasis on pre-test probability (e.g. integration with prior evidence), statistical context (e.g. effect size and power), and alpha-levels adjusted based on clinical context.  The truth machine is broken: welcome to the grey zone.



  • P-values over-estimate the strength of evidence.  Research using Bayesian statistics suggests that p=0.05 corresponds to a likelihood ratio of only 3-5 in favor of the experimental hypothesis.
  • P-values are very poorly reproducible.  Repeating an experiment will often yield a dramatically different p-value.
  • Any approach to hypothesis testing should take into account the pre-test probability that the hypothesis is valid.  Just like a laboratory test, a statistical test is meaningless without clinical context and pre-test probability.
  • Avoid blindly using conventional cutoff values (e.g., p<0.05 and α<0.05) to make binary decisions about the hypothesis (e.g., significant vs. nonsignificant).  Life just isn't that simple. 

References of particular interest
  • Goodman SN.  Toward evidence-based medical statistics.  1: The P-value fallacy.  Ann Intern Med 1999; 130: 995-1004; and the adjacent article, 2: The Bayes factor.  Ann Intern Med 1999; 130: 1005-1013. 
  • Goodman SN.  A dirty dozen: Twelve p-value misconceptions.  Semin Hematol 2008; 45: 135-140.
  • Johnson VE.  Revised standards for statistical evidence.  Proceedings of the National Academy of Sciences 2013; 110(48): 19313-19317.
  • Halsey LG et al.  The fickle P value generates irreproducible results.  Nature Methods 2015; 12(3): 179-185.
Notes

(1) Standard null-hypothesis testing using the p-value does not explicitly assign any pre-test probability to the null hypothesis or the experimental hypothesis.  Supporters of p-values would argue that this is an advantage of null-hypothesis testing, allowing the procedure to avoid the slippery issue of pre-test probability.  However, null-hypothesis testing ignores the pre-test probability entirely, applying the same rigor to every hypothesis.  By ignoring the pre-test probability, this procedure indirectly implies that it is unimportant (i.e., doesn't significantly differ from 50%). 

(2) The p-value is actually the likelihood of obtaining the observed result or any more extreme result based on the null hypothesis.  This nuance is left out of the body of the text merely for the sake of brevity.  The distinction may be a real issue, however, because the p-value is not a measurement of the data itself but actually a measurement of more extreme data.  Since the precise distribution and nature of this extreme data is generally not known (but rather inferred), this can lead to incorrect results. 

(3) Unfortunately, likelihood ratios and Bayes Factors are defined in terms of odds, but in general it's easier to think about things in terms of probabilities.   Odds and probabilities can be easily converted to one another, although this gets tiresome.   The fastest way to convert a pre-test probability into a post-test probability using the Bayes Factor (or a Likelihood Ratio) is via an online statistical calculator.






Apneic oxygenation and high-flow nasal cannula don’t prevent desaturation during intubation?




Introduction

Recently there has been increased interest in the use of high-flow nasal cannula (HFNC) to provide preoxygenation and apneic oxygenation during endotracheal intubation.  Previous posts have discussed the basic physiology and some evidence behind this.  Vourc'h et al. just published an RCT showing no benefit from HFNC in this situation (1).  What should we make of this new data? 

Overview of Vourc'h et al.


The study design is shown above.  124 hypoxemic patients requiring intubation were randomized to either an HFNC group (who received 60 liters/minute flow of 100% oxygen throughout the procedure) or a control group (who received preoxygenation using a face mask at 15 liters/minute flow and no apneic oxygenation).  The primary endpoint was the lowest saturation during the intubation procedure.  There was no significant difference in this outcome, with a trend towards improved saturation in the HFNC group (figure below).  This was a very sick group of patients, who experienced a substantial rate of severe desaturation.


Consideration of these results in context of physiology

These results make little physiologic sense.  The control group was preoxygenated with a facemask at 15 liters/minute flow (which typically achieves an inhaled FiO2 of 60-70%; Weingart and Levitan 2011) and received no apneic oxygenation.  In contrast, the HFNC group was preoxygenated with 100% oxygen at 60 liters/minute flow (which provides >90% inhaled FiO2 as reviewed here) and received ample amounts of apneic oxygenation.

In order to believe the results of this study, one would have to question both the utility of HFNC and also the utility of apneic oxygenation.  Based on physiology and prior evidence supporting apneic oxygenation, there really should have been no contest between these two therapeutic arms.

Why didn't they observe a difference?

Desaturation during endotracheal intubation depends on a number of factors, including the quality of preoxygenation and apneic oxygenation, procedure duration, severity of underlying lung disease, and lung collapse during the procedure.  For example, an edentulous patient who can be intubated in 15 seconds may not desaturate despite poor preoxygenation.  Alternatively, a patient with severe ARDS and morbid obesity may desaturate despite ideal preoxygenation and apneic oxygenation. 

One challenge of critical care studies is that patients are very heterogeneous.  Although higher quality preoxygenation may make a difference, this difference will be very hard to detect in the setting of widely varying patients with differing illness severity and airway anatomy.  It is likely that any signal from different types of preoxygenation was lost in this "noise" induced by heterogeneity.  The authors of this study did concede that “the timing of invasive mechanical ventilation probably governs the depth of desaturation during [intubation] more than the preoxygenation device.” 


Conclusions

This study failed to detect a benefit from HFNC as well as apneic oxygenation, most likely due to a relatively low sample size combined with a high degree of patient heterogeneity.  It is extremely difficult to believe that neither HFNC nor apneic oxygenation works at all. 

This study does emphasize an important point, which is that providing 100% FiO2 cannot prevent desaturation due to atelectasis and derecruitment of the lungs (which causes a physiologic shunting of blood through collapsed lung areas).  Patients at higher risk of lung collapse include patients with morbid obesity and patients with parenchymal lung disease (e.g. ARDS).  Such patients may be most safely preoxygenated using noninvasive ventilation in order to provide both high levels of FiO2 and positive pressure to recruit the lungs. 

 
  • Vourc'h et al. found that neither HFNC nor apneic oxygenation was effective for reducing desaturation during intubation.  This is probably due to a high level of heterogeneity among patients, drowning any potential signal in noise. 
  • This study should not be used as evidence to abandon HFNC and apneic oxygenation to reduce peri-intubation desaturation.  In particular there is an extensive body of evidence supporting the efficacy of apneic oxygenation (e.g., Weingart and Levitan 2011).  The precise role of HFNC remains unclear.   
  • Providing 100% FiO2 cannot prevent desaturation due to lung collapse.  For patients at high risk of lung collapse (e.g. ARDS, morbid obesity), noninvasive ventilation should be considered since it provides both positive pressure and high FiO2.

Notes

(1)  Vourc'h M et al.  High-flow nasal cannula oxygen during endotracheal intubation in hypoxemic patients: A randomized controlled trial.  Intensive Care Med, April 2015.

Cognitive approach to shock diagnosis using ultrasonography


Recently I coauthored an article about the bedside evaluation of shock using ultrasonography.  It's a reasonable article, albeit conventional.  Below is a summary of the key points.  

Many textbooks recommend line-box algorithms for approaching a patient with shock, for example the ACES algorithm below.  These algorithms allow the operator to reach a diagnosis based on 2-3 decision nodes, without taking other information into account.


Although line-box algorithms are efficient, they fail when approaching complicated patients with multifactorial shock.  Additionally, they may encourage clinicians to focus on only a few features of the examination.  A more thorough approach is to perform a complete examination and then compare it to patterns expected for various types of shock (table below).  This may facilitate identification of patients with multifactorial shock, who will often defy simple categorization.


Clinical context is useful as well.  For example, hypovolemic and distributive shock may appear nearly identical on ultrasound (table above).  Other clinical findings may help make this distinction:


Finally, archival information can also be critical.  It is increasingly common to see patients with acute disease superimposed on chronic problems.  Many people are walking around with dilated right ventricles or severely reduced ejection fraction every day.  Evaluation of prior echocardiograms, EKGs, and CT scans may help determine if such features are acute or chronic (noting that chest or abdominal CT scans often reveal useful information about cardiac anatomy).

In conclusion, shock evaluation is hard work.  It starts with a thorough examination of the heart, lungs, and other relevant organs (e.g. DVT study if PE is suspected).  This must then be integrated with the clinical context including history, traditional examination, and any available diagnostic tests.  Finally, reviewing archival material can be crucial to confirm that pertinent abnormalities are truly part of an acute disease process.  Although this is not easy, it will often result in prompt and unexpected diagnoses, which can be life-saving. 

For the complete article:  Farkas JD and Anawati MK.  Bedside Ultrasonography Evaluation of Shock.   Hospital Medicine Clinics 2015; 4(2).   



What is the evidence behind the IVC filter?

Introduction

Until recently, recommendations regarding IVC filters have been based predominantly on a single RCT (PREPIC-1).  Last week, a second RCT was released in JAMA (PREPIC-2).  This post will review both studies.  What is the evidence basis for using IVC filters?

PREPIC-1 (Decousus et al.  A clinical trial of vena caval filters in the prevention of pulmonary embolism in patients with proximal deep vein thrombosis.  NEJM 1998; 338:409)

This was a prospective multi-center non-blinded RCT involving 400 patients with proximal DVT who were considered by their physicians to be high risk for pulmonary embolism.  All patients were anticoagulated with heparin followed by warfarin (1).  Half were randomized to receive a permanent IVC filter.  The study was intended to include 800 patients, but was stopped prematurely due to slow recruitment. 

At baseline, patients underwent ventilation-perfusion scanning (VQ scan) with invasive pulmonary angiography "strongly recommended" as well.  Imaging was repeated at 12 days in all patients, and additionally whenever there was a suspicion for a new pulmonary embolism (typically with VQ scan first, followed by pulmonary angiography if there was a concern regarding new PE on the VQ scan). 


The short-term results are shown above.  At baseline, half of the patients were found to have a PE.  Among patients receiving an IVC filter, there were fewer new PEs at twelve days, a difference driven largely by the rate of asymptomatic PE.


Two year follow-up is shown above.  There was no significant benefit in terms of mortality or symptomatic pulmonary embolism.  An increase in recurrent DVT was noted in patients who received a filter.  37 patients with recurrent venous thromboembolic disease were evaluated for filter patency, among whom there was a 43% rate of IVC filter thrombosis. 

The authors chose the 12-day outcomes as the primary endpoint.  Therefore, they concluded that this was a positive study proving that IVC filters reduce the incidence of pulmonary embolism.  There are two problems with this argument.

First, it is unclear that the difference in pulmonary embolism at 12 days is statistically significant.  Analysis with a Fisher Exact test reveals p=0.06 (figure below).  The reason the authors calculated p=0.03 may relate to the use of a different statistical test and/or 28 patients being excluded from this analysis (2).  Regardless, this blog has previously explored how p-values over-estimate the strength of evidence, so even if p=0.03 this is not definitive evidence. 
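This recalculation can be reproduced along the following lines.  The two-by-two counts are approximations inferred from the published event rates (roughly 2/200 pulmonary emboli in the filter group versus 9/200 without a filter); the exact post-exclusion denominators are not reported:

```python
# Recompute the 12-day pulmonary embolism comparison using a Fisher exact test.
# The counts below are approximations inferred from the published event rates;
# the exact post-exclusion denominators are not reported in the paper.
from scipy.stats import fisher_exact

table = [[2, 198],   # filter group:    PE, no PE
         [9, 191]]   # no-filter group: PE, no PE

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"Fisher exact p = {p_value:.3f}")  # ~0.06, i.e. not clearly below 0.05
```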

Second, the twelve-day pulmonary embolism rate is an unusual choice for the primary outcome.  This endpoint was largely driven by a difference in asymptomatic emboli detected using an aggressive screening protocol (including both ventilation-perfusion scans and invasive angiography).  This approach may have detected small and clinically insignificant emboli.  

In conclusion, from the perspective of patient-centered outcomes this was a negative study.  There was no difference in mortality, bleeding, or symptomatic pulmonary emboli.  There was a higher rate of recurrent DVT among patients receiving an IVC filter.  The finding of a lower rate of pulmonary embolism at twelve days is of unclear statistical or clinical significance, since it was largely due to asymptomatic emboli.  Overall, the evidence of harm (DVT and filter occlusion) is more persuasive than any evidence of benefit.  The authors acknowledge this in the conclusions of the paper, stating "because of the observed excess rate of recurrent DVT and the absence of any effect on mortality among patients receiving filters, their systemic use cannot be recommended." 

PREPIC-1 Follow-Up Study (Eight-year follow-up of patients with permanent vena cava filters in the prevention of pulmonary embolism.  Circulation 2005; 112: 416-422)

The subjects continued to be followed up for eight years, including annual telephone calls to review any symptoms of pulmonary embolism, with reminders to pursue testing if patients experienced such symptoms.  Meanwhile the standards for diagnosing a "new" pulmonary embolism were relaxed.  In the original PREPIC-1 study, in order to diagnose a new pulmonary embolism, two studies of the same type were compared to prove a truly new finding (e.g. VQ scan vs. prior VQ scan or angiography vs. prior angiography).  In this follow-up study, it was possible to diagnose a new PE based on a positive CT-angiogram compared to a prior VQ scan.  Diagnosis of a new PE could even be based on an abnormal chest radiograph "if there was strong clinical evidence of pulmonary embolism and associated acute proximal deep-vein thrombosis."

The results are shown below.  There was no survival benefit.  Patients who received a filter were more likely to have a new DVT and less likely to have a new PE.


This study does suggest that IVC filters may reduce the risk of non-fatal PE.  However, it may be limited by bias.  Patients were aware of whether or not they received an IVC filter.  Furthermore, the results of the original study were released while the follow-up study was being performed.  Therefore, the patients' physicians and some of the patients themselves were likely aware of these results.  This may have caused a greater level of anxiety about PE and higher intensity of investigation among patients who did not receive an IVC filter.  The authors noted that "it is possible that the diagnosis of pulmonary embolism may have been underestimated in patients in the filter group, because local clinicians tend to suspect pulmonary embolism less frequently in patients with a filter than in patients without a filter."  

PREPIC-2  (Effect of a retrievable inferior vena cava filter plus anticoagulation vs. anticoagulation alone on risk of recurrent pulmonary embolism: A randomized clinical trial.  JAMA 2015; 313: 1627)

This was a prospective multicenter non-blinded RCT involving 399 patients with acute symptomatic PE, DVT, and at least one criterion for severity (age >75, active cancer, chronic cardiac or respiratory insufficiency, recent ischemic stroke with leg paralysis, DVT involving the iliocaval segment or occurring bilaterally, right ventricular dilation, elevated BNP, or elevated troponin).  Patients were all anticoagulated for at least six months, and randomized to receive a retrievable IVC filter (with the intention of removing the filter after three months) or anticoagulation alone.  A new PE was defined as the interval appearance of a new abnormality on CT angiography, invasive angiography, VQ scan, or autopsy. 


Results at three months and six months are shown above.  Filter insertion was accomplished in 193/200 patients in the filter group, and 153 of these 193 filters were removed after three months.  There were no statistically significant differences, with trends towards increased recurrent PE and increased mortality among patients receiving IVC filters.  Although no definite conclusion can be reached from these results, they make it less likely that IVC filters confer any substantial benefit. 

Conclusions

The number of guidelines and position papers on IVC filters greatly outweighs the actual evidence supporting these devices.  Aside from some smaller studies, the PREPIC series are the primary RCTs investigating IVC filters.

IVC filters offer little benefit for patients who can tolerate anticoagulation.  They carry no mortality benefit and may lead to a variety of filter-related complications (e.g. filter fracture, IVC perforation, filter thrombosis, and filter migration).  IVC filters seem to increase the risk of DVT.  At this point it is unclear whether IVC filters truly reduce the risk of PE.  Overall, for patients who can tolerate anticoagulation, management should focus on optimizing medical therapy (i.e. drug selection, correct dosing, adherence, and therapeutic monitoring).  This is consistent with the most recent 2012 Guidelines from the American College of Chest Physicians, which recommend against IVC filter placement in patients who can receive anticoagulation.

Many questions remain unanswered.  Comparison of different filter types remains unclear.  The primary unanswered question may be whether IVC filters benefit patients who cannot tolerate anticoagulation.  Although this is widely recommended, it is based largely on indirect evidence from PREPIC-1.  The risks of DVT and filter thrombosis, already established in the setting of anticoagulation, are probably greater in patients who are not anticoagulated.  PREPIC-2 casts some doubt on whether IVC filters reduce the incidence of PE, which may push this question towards a point of equipoise.  As usual, further evidence is needed.


  • For patients receiving anticoagulation, IVC filters do not improve mortality and may increase the risk of DVT and filter-related complications (e.g. filter thrombosis, migration, or fracture).
  • It is unclear whether IVC filters reduce the risk of new PE.  PREPIC-1 and PREPIC-2 both suggest that there is little or no short-term reduction (e.g. within 6-24 months).  Long-term follow-up of the cohort from PREPIC-1 found a reduced risk of new PE at eight years, but this could have been subject to some bias. 
  • Currently evidence and guidelines from the American College of Chest Physicians both suggest that there is no role for IVC filters among patients who can receive anticoagulation.  

Must-read article: 

Prasad V, Rho J, Cifu A.  The IVC Filter: How could a medical device be so well accepted without any evidence of efficacy?  JAMA Internal Medicine 2013; 173(7) 493-495.

Notes

(1) Patients were also randomized to receive a heparin infusion or low molecular-weight heparin in a two-by-two factorial design. 

(2) 28 patients were not analyzed at the 12-day timepoint: four patients died of other causes and in 24 patients follow-up studies "could not be performed or were not interpretable."  These patients were excluded from data analysis.  The study does not report how these 28 patients were distributed between the two groups, making it difficult to replicate their statistical analysis.



Apneic ventilation using pressure-limited ventilation

Introduction

Noninvasive ventilation (i.e. BiPAP) is arguably the most powerful approach to optimize oxygenation and ventilation before intubation, given its ability to provide 100% FiO2, PEEP, and ventilatory support.  The only way to improve upon this is to extend the administration of positive pressure ventilation throughout sedation and paralysis, right up until the moment of intubation.  Either a mechanical ventilator or some BiPAP machines can easily be set to deliver ventilator-triggered breaths after the patient becomes apneic.  This is similar to manually bagging the patient, but using a machine improves precision and safety.  Although unnecessary for most patients, apneic ventilation may be useful for patients at high risk of hypoxemia or acidosis. 

Nuts & bolts

Apneic ventilation using a BiPAP machine with Spontaneous/Timed mode (S/T Mode)


Some newer BiPAP machines (e.g. Philips BiPAP Vision and Philips Respironics V60) can be set in a "spontaneous/timed" mode (S/T mode).  As long as the patient is breathing at a rate higher than the set rate, S/T mode is identical to BiPAP.  However, if the patient's respiratory rate drops below the set rate, machine-triggered breaths will be delivered (functioning identically to a traditional mechanical ventilator set on pressure-controlled ventilation).  Apneic ventilation can also be performed by connecting a facemask to a traditional mechanical ventilator set to provide pressure-controlled ventilation.

Transitioning from spontaneous breathing to machine-triggered ventilation

The transition onto ventilator-supported breathing may be seamless.  While the patient is breathing spontaneously, the machine can be set at a rate 5-10 breaths/minute below the patient's respiratory rate.  This will have no effect until apnea occurs, when the machine will immediately begin providing pressure-controlled ventilation.  At this point, the respiratory rate may be increased to the optimal rate for machine-triggered ventilation (e.g. 30 breaths/minute, as discussed below). 

Keep the airway open

Whether performing apneic oxygenation or apneic ventilation, nothing works if the airway is occluded (e.g. due to the tongue falling backwards after paralysis).  One advantage of apneic ventilation is that it provides a continuous monitor of whether the airway is open.  The machine will display the tidal volumes that the patient is receiving.  Following paralysis, the tidal volumes will fall, but they shouldn't fall to zero.  If the tidal volumes fall very low, this suggests airway occlusion.  Usually, patient positioning (i.e. ear to sternal notch) plus simple airway maneuvers (i.e. head tilt and chin lift) may open the airway.  

Machine settings to optimize oxygenation

Physiology of recruitment: Understanding transpulmonary pressure

When we think about using positive pressure to recruit the lungs, we generally think about PEEP.  However, PEEP is only part of the story.  For example, let's imagine a woman with severe ARDS who is placed on BiPAP 15cm/5cm.  The pressure which is opening her alveoli is the transpulmonary pressure, which equals the difference between her alveolar pressure and the pressure in her pleural cavity.  This is equal to the positive pressure from the BiPAP mask minus the pressure generated by her diaphragm (1):


In this scenario, lung recruitment only occurs during inspiration, when her transpulmonary pressure is +30cm.  During exhalation, her transpulmonary pressure is -3cm, which will de-recruit her lungs.  Thus, the primary factors opening up her lungs are actually the peak pressure of the BiPAP machine (+15cm) and negative pressure produced by her diaphragm (-15cm), not the PEEP. 

The single most important number influencing her recruitment may be her mean transpulmonary pressure.  For example, if she spends most of the respiratory cycle in inspiration, then her mean transpulmonary pressure will be closer to +30cm.  Alternatively, if she has a lower respiratory rate and is taking shorter breaths, then her mean transpulmonary pressure will be closer to -3cm. 
  

Now let's imagine that she is paralyzed prior to intubation.  While apneic, her transpulmonary pressure may be stable at around 5cm.  (Note that if she were obese, her diaphragm would compress her lungs during apnea, producing a transpulmonary pressure <5cm).  We have just taken away her inspiration, which was recruiting her lungs with a pressure of +30cm.  Without these bursts of pressure during inspiration, her lungs may collapse before intubation. 

How to set the BiPAP machine during apnea to optimize transpulmonary pressure

The transpulmonary pressure may be approximated using the equations below (2).  The driving pressure is equal to the Peak Pressure minus the PEEP, which is the pressure differential that drives gas into the lungs with each breath (thus determining how large each tidal volume is).


The peak pressure must be limited to avoid gastric insufflation.  This establishes a tradeoff between the PEEP and the driving pressure: the higher the driving pressure is, the lower the PEEP must be.



For a patient with severe hypoxemia, we generally want to maximize the transpulmonary pressure even if this occurs at the cost of reducing ventilation.  This may be achieved by increasing the PEEP.     

The other approach to improving transpulmonary pressure is to increase the percentage of time that the patient spends in inspiration.  This may be done either by increasing the respiratory rate, or by increasing the inspiratory time of each breath.  The best way to do this is to increase the respiratory rate, because this will simultaneously improve oxygenation and ventilation.  A respiratory rate of 30 breaths/minute with a one-second inspiratory time will cause half of the time to be spent in inspiration (Inspiration : Expiration ratio of 1:1).  This is the highest fraction achievable with a Respironics BiPAP machine (3). 
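A rough sketch of the underlying arithmetic is shown below.  The pressures and timing are example values, and mean airway pressure is used as a simple stand-in for the machine-applied component of transpulmonary pressure:

```python
# Estimate the time-averaged airway pressure of pressure-controlled breaths.
# Settings are example values; mean airway pressure is used as a stand-in for
# the machine-applied component of transpulmonary pressure.

def mean_airway_pressure(peak_cm: float, peep_cm: float,
                         rate_per_min: float, insp_time_s: float) -> float:
    cycle_s = 60.0 / rate_per_min
    insp_fraction = insp_time_s / cycle_s      # fraction of each cycle at peak pressure
    return insp_fraction * peak_cm + (1 - insp_fraction) * peep_cm

# Rate of 30/min with a 1-second inspiratory time -> I:E of 1:1 (half the cycle at peak)
print(mean_airway_pressure(peak_cm=15, peep_cm=5, rate_per_min=30, insp_time_s=1.0))  # 10.0
# A slower rate of 12/min spends more of each cycle at PEEP, lowering the mean pressure
print(mean_airway_pressure(peak_cm=15, peep_cm=5, rate_per_min=12, insp_time_s=1.0))  # 7.0
```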

Therefore:

Machine settings to optimize ventilation

Less commonly than hypoxemia, we encounter patients with severe metabolic acidosis and a compensatory respiratory alkalosis (e.g. diabetic ketoacidosis or salicylate intoxication).  These patients are at risk of acidosis in the peri-intubation period because we are taking away their compensatory respiratory alkalosis and potentially replacing it with a respiratory acidosis.  Intubation of such patients should be avoided if possible (discussed previously on the post about DKA).  However, sometimes it is unavoidable.  In such situations, all efforts must be made to maintain the PaCO2 as low as possible throughout the peri-intubation period.

Often such patients have normal lungs, in which case the ventilator settings can be set to maximize ventilation.  This may be achieved by maximizing the driving pressure and decreasing the PEEP to zero (4).

Therefore:

Safety and maximal peak pressure

Interposing ventilation between paralysis and intubation is controversial.  Some would argue that true rapid sequence intubation (RSI) involves no ventilation, with any ventilation increasing the aspiration risk.  However, for a patient at high risk of desaturation it may be safer to perform apneic ventilation up-front in a controlled fashion, thereby extending the safe apnea time and increasing the likelihood of first-pass success.  Providing no ventilation up-front often results in the patient desaturating and requiring urgent manual bagging.  With either approach, apneic ventilation occurs; it's simply a matter of timing and control.  Pressure-controlled ventilation has been shown to result in lower peak pressures compared to manual ventilation, implying greater safety (Goedecke 2004).

The maximal pressure that may be safely applied without insufflating the stomach and causing regurgitation is unclear.  Previous studies based on auscultating the stomach have suggested that pressures <20-25cm are safe.  A recent prospective, randomized, double-blind study using ultrasonography to evaluate gastric insufflation found that 15cm provided the ideal balance between avoiding gastric insufflation and providing adequate ventilation (Bouvet 2013)(6).  The safety and efficacy of apneic ventilation using pressure-limited ventilation with a peak pressure of 15cm has been validated in elective surgical patients (Joffe 2010).

Advantages of pressure-limited ventilation

In general, mechanical ventilation can be either volume-limited or pressure-limited.  With volume-limited ventilation, the tidal volume is set by the practitioner and the pressure will vary depending on the lung compliance.  Alternatively, with pressure-limited ventilation, the peak pressure is set by the practitioner and the volume will vary depending on the lung compliance.  Similar results can generally be achieved with either mode.  However, for the purpose of apneic ventilation, pressure-limited ventilation has some unique advantages:

Pressure-limited ventilation guarantees a safe inspiratory pressure.

Using a pressure-limited mode, as long as the inspiratory pressure is set at a safe level (i.e., 15 cm), this will guarantee that unsafe levels of pressure never occur.  Alternatively, if volume-limited ventilation is used, then high pressures can occur.  Volume-limited ventilation requires close monitoring with adjustment of tidal volumes to avoid dangerous pressures, a complex task requiring ongoing attention. 

The trade-off here is that the tidal volume is not guaranteed, so some patients may receive low tidal volumes.  Choosing pressure-limited ventilation thus prioritizes safety over efficacy.  Given that a normal minute ventilation is not mandatory during apnea, and that aspiration can be a major problem, this is a sensible trade-off. 

Pressure-limited ventilation maximizes the efficiency of inspiration.

Compared to volume-limited ventilation, pressure-limited ventilation will maximize the tidal volume for a given peak pressure (e.g. 15 cm).  With volume-limited ventilation, the airway pressure only reaches the peak pressure at the very last moment (figure below).  In contrast, with pressure-limited ventilation the airway pressure is equal to the peak pressure throughout inspiration.  Since pressure-limited ventilation maximizes the driving pressure throughout the entire breath, it will achieve a higher tidal volume compared to volume-limited ventilation with the same peak pressure (5). Seet 2009 confirmed this, demonstrating that pressure-limited ventilation achieved the same tidal volume as volume-limited ventilation despite using a lower peak pressure. 


When should apneic ventilation be considered? 

Although apneic ventilation is a useful tool for the toolbox, it is only occasionally needed.  Patients who may benefit the most include those with profound hypoxemia, severe metabolic acidosis, or morbid obesity.

For a patient requiring intubation who is already on a BiPAP machine capable of delivering apneic ventilation, this should be considered.  The primary drawback of apneic ventilation is the logistics of connecting the patient to noninvasive ventilation.  In this situation, apneic ventilation can be accomplished by pushing a few buttons on the BiPAP machine. 


  • Ventilatory support up until the moment of intubation may be easily provided using more sophisticated BiPAP machines (which can be set to provide backup respirations as soon as the patient stops breathing) or a mechanical ventilator.
  • Continuing ventilator support until intubation improves oxygenation and ventilation during paralysis.  This may be useful for patients at high risk of hypoxemia (due to lung collapse) or acidosis (due to metabolic acidosis).
  • Expert mask-ventilation technique is critical to maintain an open airway after paralysis. 
  • Use of pressure-limited ventilation during apnea guarantees avoidance of high inspiratory pressures that could cause gastric distension and aspiration. 
  • Patients with hypoxemia may benefit more from PEEP, whereas patients at risk from hypercapnia may benefit more from higher driving pressures.  The following is a rough guide to setting up apneic ventilation for different types of patients: 



Disclosures: I have no conflicts of interest nor any relationship with drug or device manufacturers.  

Notes

(1) Please note that exhalation is usually a passive process with no diaphragmatic activity.  However, in the setting of respiratory distress it may become an active process.  Also note that this patient's diaphragmatic pressures cannot be measured, and these are simply what I am imagining they might be.  Finally, note that this is a simplification which assumes zero airway resistance (such that alveolar pressure is equal to airway pressure).   

(2) Note again that this ignores any passive pressure exerted by the diaphragm to compress the lungs during apnea, for example due to pregnancy or obesity. 

(3) In theory, the respiratory rate could even be increased further, to achieve inverse ratio ventilation (inspiratory time > expiratory time), similar to the concept of airway pressure release ventilation (APRV).  However, the Respironics BiPAP machines will not allow inverse ratio ventilation (in most routine situations, inverse-ratio ventilation would result from an operator error and would be undesirable).  Using a complete mechanical ventilator, inverse ratio ventilation could be used to further improve oxygenation (although this benefit might occur at the cost of impaired ventilation).

(4) The ideal respiratory rate to maximize ventilation is unclear.  Minute ventilation is proportional to respiratory rate, so in general increasing respiratory rate is beneficial.  However, if respiratory rate is increased too much, then there will be insufficient time for the lungs to fill and empty with each breath, causing the tidal volume to fall.  A respiratory rate of about 30 breaths/minute may be a reasonable compromise.  

Note also that typical recommendations for respiratory rate during manual bagging (i.e. limiting the respiratory rate to 10-12 breaths/minute) are designed to take into account that manual bagging is a volume-limited process with a risk of progressive accumulation of excess gas in the chest (which may cause the intrathoracic pressure to spiral out of control).  With pressure-limited mechanical ventilation, this cannot happen. 
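As a toy illustration of the rate-versus-filling tradeoff described above, the first-order lung model below (with assumed compliance and time constant; not a validated respiratory simulation) shows minute ventilation rising with respiratory rate and then plateauing as filling and emptying become incomplete:

```python
# Toy first-order lung model: as the respiratory rate rises, each breath has
# less time to fill and empty, so tidal volume falls and the gains in minute
# ventilation shrink.  Compliance and time constant are assumed values.
import math

COMPLIANCE_L_PER_CM = 0.05   # assumed: 50 mL/cmH2O
TAU_S = 0.5                  # assumed inspiratory/expiratory time constant

def minute_ventilation(rate_per_min: float, driving_pressure_cm: float) -> float:
    cycle_s = 60.0 / rate_per_min
    insp_s = exp_s = cycle_s / 2                # assume an I:E ratio of 1:1
    fill = 1 - math.exp(-insp_s / TAU_S)        # fraction of full volume inhaled
    empty = 1 - math.exp(-exp_s / TAU_S)        # fraction of that volume exhaled
    tidal_volume_l = COMPLIANCE_L_PER_CM * driving_pressure_cm * fill * empty
    return tidal_volume_l * rate_per_min        # liters per minute

for rate in (10, 20, 30, 45, 60):
    print(f"RR {rate:>2}: {minute_ventilation(rate, driving_pressure_cm=15):.1f} L/min")
# Minute ventilation climbs steeply at first, then the gains shrink and reverse.
```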

(5) For intubated patients, minimizing the peak pressure doesn't matter so much (because the plateau pressure is more important than the peak pressure).  However, for mask ventilation of non-intubated patients, the peak pressure is critically important to avoid gastric insufflation.  Therefore, the lower peak pressures achievable with pressure-limited ventilation become a significant advantage. 


(6) Note that there has been no evidence directly linking the level of pressure to clinical aspiration.  It is possible that a minimal amount of gastric insufflation (e.g. as detected by ultrasound) may be clinically irrelevant.  Thus, it is possible that higher pressures (e.g. 20-25cm) are safe.  Unfortunately it is unlikely that a study relating pressure to clinical aspiration will be done, because this would require a very large sample size. 


Top 10 reasons to stop cooling to 33C

Introduction

Following the Nielsen study, many hospitals developed two protocols for temperature management after cardiac arrest (33C or 36C).  For example, the 36C protocol could be used for patients with contraindications to hypothermia (33C).  As evidence about hypothermia continues to emerge, many hospitals are abandoning their 33C protocols and using 36C for all post-arrest patients.  Although this may be old news in some locations, it remains highly controversial in the USA.  We present our opinions below, while recognizing that experts and esteemed institutions lie on both sides of this debate.

Reason #10  Focusing on depth of hypothermia may distract from the importance of duration of temperature management.

Most of the benefit of temperature management is probably due to avoidance of fever.  Thus, the duration of temperature management may be more important than the exact target temperature.  Unfortunately, excessive focus on the target temperature often overshadows the importance of the duration of temperature management.  In the past we have seen patients cooled to 33C and rewarmed over a 36-hour period, at which point the cooling pads were removed with a subsequent fever.  In efforts to maximize the "dose" of temperature management, it may be more beneficial to extend the duration of temperature management rather than lowering the target temperature.

Reason #9  Therapeutic hypothermia increases the risk of infection.

Hypothermia suppresses immune function and is associated with increased rates of bacterial infections, particularly pneumonia (Kuchena 2014).  This is a real problem, with pneumonia rates as high as 50% in some studies.  Although pneumonia has not been linked to mortality or neurologic outcomes, it may prolong the duration of mechanical ventilation and increase ICU length of stay. 

Reason #8  Therapeutic hypothermia may aggravate Torsade de Pointes.

Although uncommon, some patients present with cardiac arrest due to Torsade de Pointes (TdP).  Hypothermia causes bradycardia, QTc prolongation, hypokalemia, and hypomagnesaemia - all of which may promote the recurrence of TdP.  We have seen cases where TdP seemed to be aggravated by hypothermia, and this has also been reported in the literature (Huang 2006, Matsuhashi 2010).  It is difficult to avoid cooling patients with TdP, because the diagnosis of TdP may not be obvious initially and most hypothermia protocols are silent on this issue. 

Reason #7  Therapeutic hypothermia may compromise hemodynamics.

Therapeutic hypothermia may cause bradycardia and reduced contractility, leading to reduced cardiac output and blood pressure (e.g. table below, from the Nielsen study).  Although this can usually be compensated for with vasopressors, it leaves patients with less physiologic reserve if their hemodynamics deteriorate further.  Occasionally patients with refractory shock may require early rewarming.


The effect of hypotension on cerebral perfusion pressure is concerning.  Although hypothermia reduces intracranial pressure, it is likely that many of these patients still suffer from elevated intracranial pressures (ICP).  The combination of hypotension and elevated ICP could produce very low cerebral perfusion pressures (CPP).  Although hypothermia protocols often prescribe elevated blood pressure targets empirically to support the cerebral perfusion pressure, in practice this is often difficult to achieve. 
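To make the arithmetic concrete: cerebral perfusion pressure is mean arterial pressure minus intracranial pressure (CPP = MAP - ICP).  Below is a minimal sketch with hypothetical numbers, chosen only to illustrate how modest hypotension plus elevated ICP can compromise cerebral perfusion.

```python
# Hypothetical illustration (not patient data): CPP = MAP - ICP.
def cerebral_perfusion_pressure(map_mmHg, icp_mmHg):
    return map_mmHg - icp_mmHg

# A MAP of 65 mmHg with an ICP of 25 mmHg yields a CPP of only 40 mmHg,
# well below the ~60 mmHg commonly targeted after brain injury.
print(cerebral_perfusion_pressure(65, 25))  # 40
```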

Recently a post hoc analysis of the Nielsen trial by Annborn et al. showed a trend towards increased mortality among patients who were cooled to 33C in the presence of shock (figure below).  In summary, hypothermia worsens hemodynamics and this could lead to worse outcomes, particularly among patients with shock.  


Reason #6  Therapeutic hypothermia delays accurate neuroprognostication.

The process of cooling to 33C impairs our ability to accurately neuroprognosticate in nearly every way.  Sedatives and analgesics required to facilitate hypothermia and suppress shivering can delay the resumption of consciousness, confounding clinical neuroprognostication and prolonging the duration of mechanical ventilation.  Most other diagnostic tools are affected by cooling as well.  For example, somatosensory evoked potentials can be suppressed and have been shown in multiple case reports to return to normal several days after rewarming.  Biomarkers, particularly neuron specific enolase, are probably attenuated with hypothermia and correlate poorly with outcome in this setting.  Delays in neuroprognostication may place an excessive psychological stress on families forced to wait longer to see if their loved one will awaken.

Reason #5  Withdrawal of care following induced hypothermia can be ethically problematic.

Having embarked on a course of therapy which temporarily incapacitates the patient, there is an ethical obligation to complete the treatment course.  For example, a surgeon would not withdraw care in the middle of an operation.  Hypothermia to 33C may delay resumption of consciousness for some days.  For example, Mulder et al. 2014 reported that among patients treated with hypothermia who had a good neurologic outcome, 32% required over 72 hours to awaken.  If family members wish to withdraw care in the interim, this is ethically problematic.  It is possible that our intervention of cooling the patient to 33C could deprive the patient of the opportunity to wake up prior to terminal extubation.

Reason #4  Cognitive Offloading: Reducing focus on therapeutic hypothermia may allow us to focus more on other aspects of patient care.

Patients who have cardiac arrest are diverse and extremely ill.  These patients may have a variety of underlying processes, including myocardial ischemia, pulmonary embolism, asthma, septic shock, etc.  The presence of multiple protocols (33C and 36C) as well as the complexity of the 33C protocol may cause clinicians to focus extensively on the approach to temperature management.  This may distract clinicians from other issues, such as diagnosing and managing the underlying cause of cardiac arrest.

Reason #3  We don't fully understand what happens to the body at 33C.

Every enzyme in the body is evolutionarily optimized to function best around normal body temperature.  Hypothermia will therefore simultaneously affect every metabolic and signaling pathway.  Harmful processes will be slowed down, but so will restorative and beneficial processes.  The net effect is unclear.  The consequence of slowing down every enzyme in the human body defies prediction or understanding. 

Reason #2  Therapeutic hypothermia to 33C may be less effective in real-world settings than in clinical trials.

Cooling to 33C is a very complex and context-dependent intervention.  Its efficacy and safety depend on how well it is performed.  For example, in a small community hospital induction of hypothermia may consist of packing a patient in ice before loading them in an ambulance to transfer to a referral center.  Alternatively, at a regional referral ICU, induction of hypothermia may be accomplished with sophisticated temperature-management devices, precise electrolyte control, and careful attention to hemodynamics and cardiac rhythm. 


The studies demonstrating mortality benefit from cooling to 33C (HACA and Bernard et al.) were both performed at top research hospitals on patients presenting initially through the emergency department.  It is unclear how this may generalize to other hospitals, or to patients who are cooled prior to inter-hospital transfer.  Kim 2014 showed that prehospital cooling caused higher rates of re-arrest in the field, suggesting a potential danger if cooling is not done correctly.  Morrison et al. just released a study showing that a quality improvement project which increased utilization of cooling to 33C correlated with a trend towards reduced survival to hospital discharge.  These studies raise questions about how safe cooling to 33C is outside of major clinical trials.  Since cooling to 36C is easier and safer, it probably performs better across various settings. 

Reason #1  The main reason that 33C is still being used may be status quo bias.

Currently there is no clinical evidence that 33C is superior to 36C.  Compared to 36C, 33C has a variety of additional risks and is more technically challenging.  The continued use of cooling to 33C is an example of status quo bias (discussed further by the Medical Evidence Blog).  There is a tendency to stick with established treatments, the tried-and-true.  We have worked hard for years establishing protocols and expertise in cooling patients to 33C.  When patients did well we attributed it to the hypothermia, but when they did poorly we said "well, they would have done poorly anyway" (circular logic reinforcing the status quo).  It is hard to challenge this status quo that we have strived so hard to achieve. 

Imagine, for a moment, how history might have been different if the Nielsen, HACA, and Bernard studies had all been published simultaneously in 2002.  The accompanying editorial surely would have concluded that avoidance of fever was the critical intervention.  It is difficult to imagine that there would have been any enthusiasm for cooling to 33C in that scenario.  Thus, our current practice is shaped more by inertia than by an unbiased accounting of all available evidence. 

Lack of status quo bias might also help explain why every center involved in the Nielsen trial immediately moved to a 36C target after the conclusion of the trial (Nielsen 2015).  During the trial, the status quo of cooling every patient to 33C was inadvertently destroyed.  This might have freed these centers to make a decision without bias based on prior practice patterns. 

Conclusions

The initial studies which launched therapeutic hypothermia (the HACA trial and Bernard et al.) did for post-arrest patients what the Rivers trial did for septic patients.  Instead of being ignored for a few days on the ventilator, post-arrest patients became the focus of intensive multidisciplinary management with a focus on preventing secondary brain injury.  We have seen this aggressive management approach improve outcomes.

Over time, our approach to critical care has evolved.  The ProCESS, ARISE, and ProMISe trials have informed us that many components of the Rivers protocol are unnecessary.  Similarly, the Nielsen study has informed us that we can obtain the same results while targeting a more physiologic temperature.

We remain steadfast in our dedication to immediate, precise, and intensive resuscitation of post-arrest patients.  We are not suggesting a reduction in the energy invested in these patients, but rather that such energy may be invested more wisely in other aspects of patient care.  Rather than focusing excessively on the target temperature, it may be more important to thoroughly investigate and manage the etiology of the arrest.  It is possible that the duration of temperature management could be more important than the actual target temperature, but this aspect often receives less attention.  Meanwhile impeccable supportive care must be maintained with close attention to all organ systems.


Coauthored with Ryan Clouser (@neurocritguy), a colleague with expertise and board certification in Neurocritical Care.  This post is based on a presentation by Dr. Clouser at Medicine Grand Rounds.  

Disclaimer: These are our personal opinions and do not reflect our employers or institution (full disclaimers here).  

Conflicts of interest: None.

Pneumonia, BiPAP, secretions, and HFNC: New lessons from FLORALI

Introduction

Pneumonia is extremely common.  Nonetheless, there is surprisingly little evidence about supporting pneumonia patients using bi-level positive airway pressure (BiPAP) or high-flow nasal cannula (HFNC).  The recent FLORALI study offers new insight into this.  This post will explore how BiPAP and HFNC compare for pneumonia patients, prior evidence, and the FLORALI study.

Physiology: Comparison of BiPAP vs HFNC in pneumonia

BiPAP and HFNC are the primary techniques available to provide noninvasive support of oxygenation and ventilation in pneumonia.  Some important differences are as follows.  Please note that unless otherwise indicated, "BiPAP" is used here to refer to BiPAP delivered via a facial mask.

  • Oxygenation:  Both devices can provide close to 100% FiO2.  HFNC can provide a small and variable amount of PEEP (perhaps ~5 cm, depending on the flow rate and how snugly the nasal prongs fit into the patient's nose).  BiPAP can provide a greater amount of PEEP in a more precise fashion.
  • Work of Breathing:  HFNC may wash out the anatomic deadspace, thereby reducing the work of breathing (explained previously here).  BiPAP can provide higher inspiratory pressures, and at high settings may provide the majority of the work of breathing.
  • Secretion clearance: This is essential in the setting of pneumonia to prevent mucus plugging and remove purulent material from the lungs.  BiPAP typically impairs secretion clearance, whereas HFNC does not seem to.
  • Monitoring: BiPAP can impair patient monitoring by interfering with speech and observation of facial expressions.  Additionally, when a patient becomes anxious on BiPAP, it can be difficult to tell whether this is claustrophobia from the mask or respiratory exhaustion.  HFNC facilitates communication and early detection of patients who are failing and require intubation. 

There are theoretical advantages and drawbacks of both modalities.  BiPAP can provide greater oxygenation and ventilation support.  However, BiPAP carries risks of mucus plugging, aspiration, and impaired patient monitoring.  Clinical evidence is needed to determine which technique is better. 

Evidence before FLORALI

Confalonieri et al. 1999 American Journal of Respiratory and Critical Care Medicine

This was a prospective RCT of 56 patients with severe pneumonia.  41% of patients also had COPD.  All of the patients with COPD had hypercapnia, as did two patients without COPD.  Exclusion criteria included “inability to expectorate,” although it is unclear how this was determined.  Patients were randomized to BiPAP vs. conventional oxygen therapy.

BiPAP caused a reduction in intubation rates (21% vs. 50%, p=0.03) and ICU length of stay (2 days vs. 6 days, p=0.04).  This was driven primarily by the patients with COPD, among whom there was reduced mortality with the use of BiPAP (table below).  


Interpreting this study is difficult.  Generally post-hoc subgroup analysis is frowned upon.  However, the patients in this study were sharply divided:  half had COPD and were uniformly hypercapnic, whereas the other half didn't have COPD and generally were not hypercapnic.  Thus, sub-group analysis seems reasonable.  This study is often interpreted as showing a benefit among patients with COPD and pneumonia, but not among patients with only pneumonia.  

Ferrer et al. 2003 American Journal of Respiratory and Critical Care Medicine

This was a prospective RCT of 105 patients with severe hypoxemic respiratory failure who were randomized to receive supplemental oxygen versus BiPAP.  Unlike Confalonieri et al., patients with hypercapnia were excluded.  Patients treated with BiPAP were allowed breaks to improve tolerance or clear secretions.  

BiPAP caused a reduction in intubation rate by about 50%, an effect which was also observed in the subgroup of patients with pneumonia (table below).  BiPAP also caused a reduction in ICU mortality. 


What accounts for the difference in results compared to Confalonieri?  One possibility is random chance (for example, if two patients in the pneumonia subgroup had experienced different outcomes, the p-value for this subgroup would have shifted above 0.05).  Another possibility is that the ICUs in this study were exceptionally good at providing BiPAP.  When patients had difficulty tolerating BiPAP, they were given breaks from the mask to facilitate secretion clearance, rather than proceeding directly to intubation. 

Descriptive studies

Observational studies describe mixed results for treating pneumonia with BiPAP.  Compared to other types of respiratory failure, pneumonia is a risk factor for BiPAP failure, with failure rates of roughly 50% (Ferrer 2015).  Thus, the 26% failure rate reported by Ferrer et al. 2003 may not be replicable in general practice. 

Summary of prior evidence

Most notable is the lack of evidence regarding BiPAP for patients with pneumonia.  Confalonieri and Ferrer are the largest applicable RCTs, but even these studies combined include only 67 patients with pneumonia alone.  The two studies suggest conflicting results, possibly due to different techniques in the application of BiPAP.  There are no prior RCTs investigating HFNC in pneumonia. 

FLORALI study (Frat et al. NEJM 2015)

This is a prospective randomized trial of BiPAP vs. HFNC vs. non-rebreather facemask for patients with acute hypoxemic respiratory failure.  Inclusion criteria included respiratory rate >25 breaths/min, hypoxemia (PaO2/FiO2<300), absence of hypercapnia, absence of underlying chronic respiratory failure, hemodynamic stability, and GCS>12.  HFNC was performed using large-bore nasal prongs at 50 liters/minute flow and continued for at least 2 days.  BiPAP was applied for at least 8 hours/day for two days, with HFNC applied during breaks in the use of BiPAP. 

310 patients were recruited, of whom 82% had pneumonia.  Intubation and survival curves are shown below.  The hazard ratio for death at 90 days was 2.01 for face-mask versus HFNC (p=0.046) and 2.50 for BiPAP vs. HFNC (p=0.006).  Thus, compared to face-mask, HFNC reduced mortality by a factor of two, with an NNT of 10.  HFNC also caused a greater improvement in tachypnea, discomfort, and dyspnea scores compared to other therapies. 
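For readers who want the arithmetic behind that NNT, here is a minimal sketch; the mortality figures are rounded approximations assumed for illustration, not the exact trial data.

```python
# NNT = 1 / absolute risk reduction (ARR).  Rates below are assumed, rounded
# values chosen only to illustrate the calculation.
mortality_facemask = 0.22  # illustrative 90-day mortality with face mask
mortality_hfnc = 0.12      # illustrative 90-day mortality with HFNC

arr = mortality_facemask - mortality_hfnc  # 0.10
print(round(1 / arr))  # 10 -> treat ~10 patients with HFNC to prevent 1 death
```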


Although compelling, this paper does have some weaknesses.  For example, 26 patients in the face-mask group and 14 patients in the HFNC group received BiPAP as “rescue therapy,” of whom 70% failed BiPAP and required intubation.  This was not a protocol violation, but was actually allowed within the study design.  Such cross-over may blur differences between groups, reducing observed effect size.

The primary weakness of the study may relate to precisely which diseases are being investigated.  First, it should be noted that the name of the study group is misleading (FLORALI = High FLow Nasal Oxygen in the Resuscitation of patients with Acute Lung Injury)(1).  This study is not an investigation of Acute Lung Injury (e.g. 21% of subjects had unilateral infiltrates, thus not meeting the definition of ALI).  However, many commentaries on the study have already been confused by this name, writing that this was a study of ALI.  To confuse matters further, technically the term "ALI" has now been replaced by "mild ARDS" according to the new Berlin definitions.

Nomenclature aside, this investigation is based on the assumption that all hypoxemic respiratory failure patients require the same treatment, which is probably wrong.  For example, patients with ARDS due to extrapulmonary sepsis may be more unstable and more likely to require intubation (Hess 2013, Ferrer 2015).  Combining heterogeneous groups of hypoxemic patients is probably not valid.  The great majority of patients had pneumonia (82%), so it is likely that these patients drove the study results.  Therefore, it might be most accurate to conceptualize this as a study of patients with severe pneumonia.  Given the low numbers of patients with each non-pneumonia diagnosis and the lack of subgroup analysis, the results from this study are not necessarily generalizable to every patient with hypoxemic respiratory failure. 

Conclusions

Until recently, there has been little evidence upon which to base the selection of BiPAP vs. HFNC for patients with pneumonia.  FLORALI is a large RCT which constitutes the best available evidence on this topic.  In particular, it is the only RCT which provides a direct comparison between HFNC and BiPAP. 

The benefit of HFNC compared to non-rebreather facemask is not surprising.  HFNC provides powerful support of oxygenation and some support of ventilation as well.  It is extremely safe and well tolerated, with nearly no complications or contraindications.  In short, HFNC brings a lot to the table with almost no drawbacks. 

The trend towards harm from BiPAP is consistent with prior series showing a high failure rate of BiPAP in pneumonia.  This could relate to impaired secretion clearance.  It is conceivable that BiPAP via a nasal interface could perform better than BiPAP via a full facial mask, because nasal BiPAP interferes less with expectoration and communication.

Overall, the best evidence currently supports the use of HFNC as a first-line therapy for patients who have hypoxemia due to severe pneumonia and don't require immediate intubation.  This has been our practice at Genius General Hospital for years with good results.  Further studies are needed to confirm this and clarify exactly which patients benefit from HFNC. 

Both FLORALI and Maggiore 2014 (an RCT showing reduction in reintubation using HFNC, discussed previously here) utilized HFNC at 50 liters/minute flow.  Although oxygenation may be maintained using lower flow rates, a high flow rate provides more support for the work of breathing and perhaps some additional PEEP.

The optimal treatment for patients with a combination of COPD and pneumonia remains unclear.  The use of BiPAP in such patients is supported by a strong track record for BiPAP in COPD exacerbations, as well as evidence from Confalonieri et al.  The optimal approach may be determined on a patient-by-patient basis, depending on the dominant disease process, cough strength, and secretion volume.

BiPAP retains one unique advantage over HFNC in pneumonia: portability.  HFNC consumes an enormous amount of oxygen, typically requiring direct connection to a wall oxygen line.  BiPAP is therefore a useful approach to support pneumonia patients temporarily during transportation.


  • FLORALI is a large RCT directly comparing HFNC vs. BiPAP vs. non-rebreather facemask for patients with hypoxemic respiratory failure (82% with pneumonia).  Until now there has been very little evidence about this. 
  • HFNC caused a reduction in mortality and days spent on invasive mechanical ventilation.  This supports the use of HFNC as the first-line approach to noninvasive support of patients with pneumonia.
  • In order to provide optimal support for the work of breathing, HFNC should probably be set at a high flow rate (i.e. 50 liters/minute flow) if tolerated. 
  • Use of BiPAP was associated with trends towards increased intubation rate and higher mortality.  This might be due to BiPAP interfering with expectoration of secretions, leading to mucus plugging.  



Notes

(1) The full name of the study group is the Clinical Effect of the Association of Noninvasive Ventilation and High Flow Nasal Oxygen Therapy in Resuscitation of Patients with Acute Lung Injury (FLORALI).  However, the acronym FLORALI is only derived from the second half of this name.  I guess they didn't like the sound of CEANVHFNOTRPALI. 

Dear NEJM: We both know that conflicts of interest matter.


Introduction

Recently the New England Journal of Medicine launched a media campaign challenging the negative perception of industry conflicts of interests (COI).  This was surprising, because it is the opposite of what editors of the NEJM have previously reported (see above books by former NEJM editors, published in 2004 and 2005).  Big pharma hasn't reformed dramatically in the last decade.  So why the change of heart?

History of the NEJM & COI

Some context is helpful.  In 1996, the NEJM editors Drs. Angell and Kassirer made it official policy of the journal that reviews and editorials could never be written by authors with financial COIs (Angell 1996).  However, these editors both left the NEJM, possibly related to a disagreement with their publisher's plans to use the NEJM's brand to promote other sources of healthcare information (Smith 2006).  Subsequently, Dr. Drazen was appointed as editor-in-chief of the NEJM in 2000, despite concern that he had ties to numerous drug companies (Sibbald 2000, Gottlieb 2000).  In 2002 the NEJM changed its policy toward COIs, allowing editorials and reviews to be written by authors with "insignificant" financial COIs (defined as <$10,000 per year from industry)(Drazen 2002). 

This policy reversal is best described by Dr. Kassirer:
"During the decade of the 1990s, when I was editor-in-chief of the New England Journal of Medicine, we rejected anyone who had a conflict of interest from writing an editorial or review article.  Sometimes it required going down the list until we found someone who didn't have a conflict, but we never had to compromise and accept someone without sufficient expertise to do a good job.  I also think it's often a good idea to get someone who isn't too close to the action:  it often avoids "group think" and provides a fresh perspective.  But to maintain our 1990s policy takes more work because you can't just accept the first person who pops into your mind.  I was disappointed when the journal changed the policy, and said so publicly."
 - Kassirer JP, British Medical Journal 2001
The current media campaign is a continuation of the direction that the NEJM set forth in 2002.  Perhaps the campaign represents a response to the British Medical Journal, which recently announced a new "zero tolerance" policy in which no financial COIs will be allowed for authors of editorials or reviews (Chew 2014). 


Current media campaign in the NEJM

This has consisted of a three-part series of articles by Dr. Rosenbaum, an editorial by NEJM editor-in-chief Dr. Drazen, and a reader poll.  The overall message is that we have grown overly suspicious of big pharma and COIs.  "We have forgotten that industry and physicians often share a mission - to fight disease."

Although Dr. Rosenbaum's articles make some valid points, they are quite one-sided.  Dr. Rosenbaum is a correspondent for the NEJM, so it is no coincidence that her articles strongly support the journal's current policy initiatives.  Ironically, this exemplifies the significance of COIs: although it is impossible to know how much her position affected her perspective, her COI naturally calls her objectivity on the matter into question.

Perhaps the most interesting component of the media campaign is the reader poll about the adequacy of various hypothetical authors for a review article.  Three potential authors are described, all of whom have significant COIs.  The design of this poll itself is biased, by presenting no authors without COIs.  A more transparent approach might be to simply ask readers "do you think review article authors should be allowed to have COIs?" 

Industry funding of NEJM

The NEJM itself has significant financial conflicts of interest.  These may stem less from print and electronic advertisements by drug companies than from industry purchases of article reprints.  If the NEJM publishes an article supporting a new drug, the drug company will often purchase thousands of reprints of the article, on which the NEJM makes a large profit margin.  For example, Merck purchased 929,400 reprints of the infamous VIGOR trial of Vioxx, yielding an estimated income for the NEJM of $697,000 (Marcovitch 2012).  The Lancet editor Dr. Richard Horton reported that companies may promise to purchase a large order of reprints in return for publication of a favorable study.

It is impossible to determine how much money the NEJM makes from reprints.  Although the BMJ and Lancet disclosed their income from reprints, the NEJM and JAMA have not done so (table above).  Between 2005-2006, the sale of reprints contributed 41% of the total income of the Lancet.  The NEJM likely receives more revenue than the Lancet from reprints, given that it publishes more industry-supported studies than the Lancet (table below).  Combining revenue from advertising and reprints, it is likely that the NEJM receives most of its revenue from industry. 


In 2008, Dr. Drazen favorably reported the revenue from industry in a meeting of the Massachusetts Medical Society (publisher of the NEJM), saying "The results in recruitment advertising and bulk reprints were outstanding this year;  They went a long way to offset declines in print-based revenue that all publishers are experiencing." (BMJ 2011).

Conflicted nature of medical publishing

Like drug companies, medical journals have a conflicted set of incentives (Marcovitch 2010).  Certainly, any journal has lofty philosophical goals such as improving medical care.  However, the journal is also a news organization, and as such may be drawn to the hottest news stories.  Finally, any journal functions within a business model, requiring sufficient revenue to stay solvent. 
  

These incentives may be conflicting.  For example, journals often tout their impact factor, a measurement of how well they are read and cited.  Less publicized is the frequency of article retractions.  Compared to other journals, the NEJM has both the highest impact factor and also the highest frequency of retractions (Fang 2011).  This suggests that in the pursuit of hot articles, corners are sometimes cut.


Managing COI: Who should write review articles and guidelines?

There are two general concepts for approaching the authorship of NEJM review articles (and guidelines in general).  The traditional approach is the subject matter expert model (below).  In this model, a handful of experts are involved in performing industry-funded research.  These experts, who usually develop some COIs, are also involved in authorship of guidelines and NEJM review articles.  This is the model which the NEJM is currently promoting.  For example, in his editorial Dr. Drazen reflected on the virtues of a simpler time in the 1940s when a single investigator could discover and market streptomycin, and then write a major review article on the same topic.


A newer approach might be described as a COI-free model (above), wherein guidelines and NEJM review articles are authored by experts without COIs.  Since investigators are often involved in industry-funded research and frequently have COIs, this would mean that prominent investigators would often be excluded from authoring guidelines and review articles in the NEJM.  As discussed above, this approach requires more work because qualified experts without COIs must be sought.  However, unbiased experts will provide fresh perspectives which add diversity to the field. 

A recent example of these two models was the evolution of the American College of Emergency Physicians (ACEP) clinical policy regarding ischemic stroke.  Initially, a policy was drafted as a joint document with the American Academy of Neurology, including authors with COIs.  The first version was very enthusiastic about the use of TPA (giving it a Level A recommendation within the 0-3 hour window).  This policy, and concerns about COIs, caused an uproar.  ACEP consequently broke away from the American Academy of Neurology and went back to the drawing board to design an entirely new policy authored solely by experts without any COIs.  The new policy is generally felt to be a major improvement compared to the initial policy, with less bias and more focus on the evidence.

Is there a shortage of authors for review articles?

The argument for allowing authors with COIs to write NEJM review articles is based on a reported shortage of eligible authors (as described in the 2002 NEJM policy statement here).  This is hard to believe.  For example, in the USA alone there are >150,000 active full-time faculty employed by medical schools.  Any of these faculty would probably be honored to write a review article for the NEJM, and many thousands of them are qualified.

Conclusions

The recent NEJM campaign in support of industry is partially correct:  COIs are not necessarily evil, and people with COIs include many brilliant researchers and clinicians.  Certainly physicians and pharma need to work together to develop new drugs, with patients often benefitting from such collaboration. 

However, there is no shortage of unbiased experts without COIs to write NEJM review articles and consensus guidelines.  Choosing physicians without COIs for these tasks makes sense.  This would avoid bias or the appearance of bias, thus bolstering trust in these sources.  As a clinician, I would be more interested in a review by an author without COI. 

Evaluating this issue exposes the fact that medical journals have significant COIs.  Journals often receive significant funds from drug companies in direct response to publishing industry-funded research (in the form of bulk reprints).  With the British Medical Journal and the NEJM moving in opposite directions on this issue, further examination of these differences is necessary.

Additional reading
  • No, Pharmascolds are not worse than the pervasive conflicts of interest they criticize:  Larry Husten in Forbes
  • Medical journals are an extension of the marketing arm of pharmaceutical companies.  Smith R, PLOS Medicine 2005
Addendum 6/3/2015: Drs. Kassirer and Angell (prior editors of the NEJM referenced above) just published an editorial in the BMJ here.   It is a must-read.

Conflicts of Interest:  None. 

Image Credits: Image of physician obtained from http://www.cliparthut.com/doctor-symbol-clipart.html


Flash cigarette burns: To intubate or not to intubate?


Getting warmed up with a multiple-choice question

A 70-year-old man with oxygen-dependent COPD is admitted following a flash burn.   He started smoking with his oxygen running, and the cigarette “exploded” in his face.  Currently he is in the emergency department on four liters nasal cannula (twice his chronic oxygen prescription).   He is mentating well with a saturation of 93% and a respiratory rate of 15 breaths/minute.  He has first-degree burns on his lips and cheeks, with soot in his nares and singed nasal hairs.   What is the best immediate management for this patient?

(a) Immediate endotracheal intubation.
(b) Laryngoscopy to evaluate the upper airway; intubate if edema or blistering is seen.
(c) Bronchoscopy to evaluate the entire airway; intubate if edema or blistering is seen. 
(d) Admit for observation.

Introduction

Education about airway injury in burn patients typically focuses on patients with smoke inhalation injury (e.g. following entrapment in a burning building).  Such patients are forced to inhale heated air, leading to a risk of delayed airway edema with difficult intubation.  Consequently, the approach to airway management in such patients often involves pre-emptive airway examination with intubation if there are signs of airway involvement.

Flash cigarette burns are entirely different.  The term "flash cigarette burn" is used here to refer to the situation in which a patient on home oxygen lights a cigarette, leading to a very exuberant but self-limited combustion of the cigarette in their face.  These fires are brief and self-contained, with primarily superficial damage.  The injury often appears misleadingly severe (i.e. face covered in soot, with singed nasal hairs).  Given a different mechanism of injury compared to other types of burns, the clinical approach should likely be different as well. 

The Evidence

Amani H et al.  Assessing the need for intubation in patients sustaining burn injury secondary to home oxygen therapy.  Journal of Burn Care & Research 2012.

This is a retrospective chart review study of 86 patients with burns associated with home oxygen between 2000-2010.  87% of these patients suffered burns while lighting a cigarette, with other causes including candles, sparks, and gas stoves.  The percent total body surface area involved ranged from 0.5-15%. 

Most patients (61%) were not intubated.  Among intubated patients, bronchoscopy revealed airway edema in 22%.  Most intubations occurred in the field or at an outside hospital, with only eight patients intubated in the ED of the burn center and one patient intubated in the ICU (for an exacerbation of asthma).  

This study is limited because it evaluates a heterogeneous group of patients (combining flash cigarette burns with more serious burn injuries).  Another limitation is that the indication for intubation in most cases was unclear, so it is unknown whether patients truly required intubation.  

Regardless, a few points are notable.  Most patients didn’t require intubation, and the great majority had no airway edema.   Perhaps more importantly, there was no evidence of delayed airway swelling:  only one patient required intubation in the ICU due to asthma exacerbation.  The authors came to the following conclusions:

“Health care providers with limited or infrequent exposure to the treatment of burn patients with singed facial and nasal hair often interpret these physical findings to be consistent with the presence of a possible inhalation injury.   This often results in unnecessary intubation in a patient who demonstrates no signs of respiratory distress or, as in a patient with COPD, no change in respiratory status from baseline.”

Muehlberger T et al.   Domiciliary oxygen and smoking:  an explosive combination.   Burns 1998.

This is a retrospective chart review of 21 patients with burns due to lighting a cigarette on oxygen therapy between 1990-1997 at Johns Hopkins Hospital.  Seventeen patients were using oxygen via nasal cannula, with four patients using a facemask.  Seventeen patients had second-degree burns, four patients had full-thickness burns, and two patients required skin grafting.  Nonetheless, no patients had an inhalational injury or required intubation.

Patient image from Muehlberger et al.   

This is a useful study because it examines only patients with flash cigarette burns.  When managed at a referral center with extensive experience treating burns, none of these patients required intubation. 

Vercruysse GA et al.  A rationale for significant cost savings in patients suffering home oxygen burns:  Despite many comorbid conditions, only modest care is necessary.  Journal of Burn Care & Research 2012.

This is a retrospective study of 64 patients admitted with burns sustained while using home oxygen therapy between 1997-2010.  92% of burns were due to cigarettes.  Intubation predominantly occurred prior to transfer to the burn center, with 28% of transferred patients arriving intubated.  An additional two patients were intubated in the emergency department prior to evaluation by the burn service.   Among all intubated patients, 80% were extubated within eight hours of admission and 100% were extubated within 24 hours of admission. 

This is an interesting study.  Given that most patients were extubated very rapidly, it is unlikely that they truly required intubation.  Furthermore, for a patient intubated pre-emptively, this data suggests that it is safe to pursue rapid extubation.  

Answer to the introductory question 

Choice (D) may be best (observation).  For patients with severe smoke inhalation injury (e.g. due to being trapped in a burning building), there is a risk of delayed airway edema with subsequent airway crisis.  Therefore, an aggressive approach to the airway is typically recommended with airway inspection and pre-emptive intubation if there is evidence of airway edema or blistering.  However, patients with flash cigarette burns do not appear to develop delayed airway edema.  Therefore, there is no indication for airway inspection or pre-emptive intubation.  

Conclusions 

Flash burns due to rapid combustion of a cigarette (sometimes with ignition of the patient’s nasal cannula as well) are typically relatively benign.  Skin grafting is only rarely required, with topical care usually being sufficient for management of the burn.  The rate of airway edema is low, and there does not appear to be a risk of delayed airway swelling or airway loss.

Pre-emptive intubation of these patients is not indicated.   Although these patients invariably have singed nasal hairs and soot in their nares, this is not an indication for intubation.  Airway management should be approached in these patients as it would be in other patients with chronic respiratory failure, with intubation only if clinically warranted (e.g. due to acute respiratory failure).   If the patient has already been intubated prophylactically, evidence supports aggressively weaning and extubating these patients.  

More on the anxiety-COPD vortex of badness here.
  
Most patients on home oxygen therapy have COPD, so a flash fire may cause bronchospasm with exacerbation of the patient’s lung disease.  Aggressive management with bronchodilators and perhaps low-dose corticosteroids may be helpful with this (e.g. prednisone 40 mg PO for five days).  Patients often have pain and anxiety related to their burns, which may cause tachypnea with worsening of gas trapping thereby aggravating their dyspnea (figure above).  Cautious use of opioids can be helpful to alleviate pain and anxiety.  Although facial burns will typically prevent application of noninvasive ventilation, the use of high-flow nasal cannula may be considered in selected patients with elevated work of breathing who do not require intubation (with very careful observation).


Overall, these patients may be approached with a focus on serial clinical assessment and common sense.  Surgical consultation is important to determine the need for skin grafting or other burn management.  From an airway and pulmonary standpoint, these patients should likely be approached similarly to other patients with chronic lung disease and respiratory dysfunction.  All efforts should be made to treat the lung disease, with intubation only if clinically warranted. 




  • Patients who have limited facial burns following a flash burn (from rapid combustion of a cigarette) typically do well with conservative therapy.  Skin grafting or intubation are only rarely required.
  • There is no role for pre-emptive intubation or routine airway examination for a patient with a limited flash burn.  If the patient has already been intubated pre-emptively, they should be aggressively weaned and extubated. 
  • Patients with a COPD exacerbation following a flash burn may be managed similarly to other patients with COPD exacerbation.   Attentive pain control will often go a long way towards making these patients feel and look better.



Hypocaloric Nutrition: Theory, Evidence, Nuts, and Bolts


Introduction

Until recently there has been little evidence regarding the caloric target for feeding critically ill patients.  In the absence of evidence, it has been assumed that we should aim to meet 100% of predicted energy needs.  New multicenter RCTs challenge this dogma, particularly the PERMIT trial by Arabi et al.

Theory supporting hypocaloric nutrition

The nutrition paradox

Critically ill patients often don't have a good appetite, especially patients with sepsis.  Patients with severe illness on a hospital diet often consume well below the recommended number of calories.  This usually goes unnoticed.  However, once a patient is intubated, enteral nutrition is initiated and it rapidly becomes obvious whether or not the patient can tolerate full caloric intake.  If they cannot, it becomes a source of enormous consternation. 

This is paradoxical for two reasons.  First, if receiving 100% full caloric intake is essential, then this should be equally important before the patient is intubated.  However, we intuitively feel that force-feeding a septic patient with no appetite is a bad idea.  Second, there is considerable confusion regarding exactly how many calories critically ill patients burn (e.g., conflicting equations to predict caloric use), and what percentage of these calories we should replace.  Consequently, when we target 100% caloric repletion, it is unclear whether we are chasing the right target.

Nutrition may not prevent muscle breakdown


In the acute phase of critical illness, systemic inflammation induces a catabolic state with breakdown of the patient's muscle protein.  Ideally, administration of adequate nutrition would prevent this process entirely.  However, muscle breakdown is a complex process driven by inflammation as well as malnutrition and disuse, which does not respond completely to nutritional supplementation.  Beyond a certain point, aggressive nutritional support may promote fat gain instead of preventing muscle loss (Schetz 2013).

Autophagy may be a good thing in moderation.


Autophagy is a process wherein cells under stress digest and recycle organelles and proteins.  This process is stimulated by starvation, and suppressed by feeding or insulin.  Animal models suggest that autophagy could be beneficial in acute lung injury as well as septic shock (Mizumura 2012).  It is possible that provision of excessive nutrition and insulin could inadvertently suppress autophagy with harmful consequences. 

Landmark papers about hypocaloric nutrition

ARDS-NET investigators.  Initial trophic vs. full enteral feeding in patients with acute lung injury: the EDEN randomized trial.  JAMA 2012.

This is a prospective multicenter RCT of patients intubated for acute lung injury comparing full enteral feeding to lower-volume trophic feeding for six days (1).  After six days, all patients received full enteral nutrition.  Patients randomized to trophic feeds received 20 kCal/hour, equal to about 25% of the estimated daily caloric goal.  One thousand adults were recruited.
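As a quick sanity check on the 25% figure, here is the arithmetic (a sketch, with the daily caloric goal an assumed typical value):

```python
# Trophic feeding at 20 kCal/hour, run continuously for 24 hours:
trophic_kcal_per_day = 20 * 24     # 480 kCal/day
assumed_daily_goal = 1920          # assumed typical caloric goal, kCal/day
print(trophic_kcal_per_day / assumed_daily_goal)  # 0.25 -> ~25% of goal
```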


There was no difference in mortality, ventilator-free days, infection, or other organ failures.  Patients in the trophic feeding group experienced less regurgitation (0.4% vs. 0.7%; p=0.003), less vomiting (1.7% vs. 2.2%; p=0.05), and on average a two-liter lower fluid balance.  As shown below, patients in the trophic feeding group achieved superior glycemic control despite receiving less insulin.  Note that after one week, insulin requirements decreased in the full feeding group, possibly reflecting a decrease in systemic inflammation and insulin resistance (more on this below).


Overall this study demonstrated that among patients with acute lung injury (mostly due to sepsis or pneumonia) a short period of underfeeding did not impact mortality or major organ function.  As might be expected, lower nutritional targets improved gastrointestinal tolerance and glycemic control.  This supports the practice of temporarily providing very low-level enteral nutrition if there are obstacles to providing a greater degree of nutritional support. 

Arabi YM et al.  Permissive underfeeding or standard enteral feeding in critically ill adults (the PERMIT trial), NEJM 2015.

This is a prospective multicenter RCT comparing provision of 40-60% of estimated caloric requirements versus 70-100% of estimated requirements, with all patients receiving the same protein intake.  894 critically ill patients with medical, surgical, or trauma admission were included, of whom 97% were intubated.



The study was well executed, with clear separation between the two groups (panel A above).  The primary outcome was mortality at 90 days, which was 27.2% in the hypocaloric group vs. 28.9% in the full nutrition group (p=0.58).  Similar to the EDEN trial, patients in the hypocaloric nutrition group achieved better glycemic control despite requiring less insulin, and had a slightly lower fluid balance (panels C-E above).  There were no differences in ventilator-free days or overall severity of illness (panel F above).  

Hypocaloric nutrition caused a slight increase in endogenous protein loss at 7 days with no difference at 14 days (as measured by nitrogen balance; panel I above).  This supports the concept that above a certain threshold, additional caloric intake doesn't strongly affect breakdown of muscle proteins. 

Although the benefits of hypocaloric nutrition shown in this study are debatable, the study provides evidence that administration of 50% of predicted caloric needs is safe for two weeks.  However, it must be noted that the investigators specifically targeted 100% of protein requirements, using protein supplements to make up the difference.

Limitations of both EDEN and the PERMIT trial

Although these are both well-performed prospective RCTs, they do share some limitations in common.  Both studies excluded patients with pre-existing malnutrition, severe shock, or burns.  EDEN also excluded patients with neuromuscular disease, severe chronic respiratory failure, or obesity, while PERMIT excluded pregnant patients.  Thus, these findings may not apply to all patients, especially patients with pre-existing debilitation or unusually high metabolic demands.

Another limitation of these studies was that they were performed in research centers with extremely close attention to the number of calories the patient was receiving.  Even in this setting, patients received less than target caloric intake (e.g. in the PERMIT trial, the "100% nutrition" group only received 70% of the calorie goal).  In real-world settings, interruptions in tube feeding would likely be a greater problem, potentially leading to a risk of substantial under-feeding.  Therefore, if hypocaloric nutrition is performed, special attention is required to the number of calories the patient is actually receiving.

Nuts & bolts of providing hypocaloric enteral nutrition

Some early studies showed an increased risk of infection with hypocaloric nutrition.  However, upon closer examination this was linked to administration of lower amounts of protein, rather than lower numbers of calories (Tian 2015).  Therefore, when providing hypocaloric nutrition it appears important to provide 100% of the daily requirement of protein (Weijs 2013).  This cannot be achieved by simply cutting the rate of tube feeds in half. 

If a nutritionist is not immediately available, the following approach may be used with most patients (excluding, for example, patients with renal failure or morbid obesity).  This approach is not completely precise.  However, since our nutritional targets are rough estimates, the entire concept of precision may be moot.  In a busy ICU, complex equations are often a barrier to implementing an evidence-based nutritional strategy at the bedside.  The approach used here is designed to be a fast and easy way to obtain a reasonable nutritional prescription.

First, a type of tube feed should be selected.  This gets confusing because several dozen tube feed formulations exist from a variety of brands.  Below is a classification of common tube feeds.  For patients with high residuals or emesis, a more concentrated formulation may be useful.

Rough classification of tube feed formulations
  • 1 kCal/ml, low-protein (~0.04 grams/ml)
    • Osmolite 1-cal
    • Peptamen
    • Nutren 1.0
  • 1 kCal/ml, high-protein (~0.065 grams/ml)
    • Promote, Promote with fiber
    • Replete, Replete with fiber
    • Peptamen VHP
  • 1.5 kCal/ml concentrated (~0.065 grams/ml)
    • Isosource 1.5
    • Nutren 1.5
    • Peptamen 1.5
    • Osmolite 1.5
    • Jevity 1.5
    • Respalor 1.5
  • 2 kCal/ml concentrated (~0.08 grams/ml)
    • TwoCal HN
    • Nutren 2.0
    • NutriRenal 2.0
    • NovaSource Renal

The table below provides nutritional prescriptions based on gender, height, and tube feed formulation.  The resulting prescription is a rate of the tube feed along with an additional amount of pure protein supplementation (available in different hospitals as either scoops of protein powder or packets of protein paste).  This table is based on approximating the caloric requirement as 25 kCal/kg/day and the protein requirement in critical illness as 1.5 grams/kg/day, both using the ideal body weight (2). 


This table looks busy, but it's easy to use.  For example, suppose that we wanted to provide hypocaloric nutrition to a man with height 68 inches using Nutren 1.5.  As shown below, this can be provided using a rate of 15 ml/hour plus 78 grams of supplemental protein per day.
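For those who prefer the table's logic spelled out algorithmically, below is a minimal sketch of the underlying arithmetic.  It assumes the Devine ideal body weight formula, the 25 kCal/kg/day and 1.5 g/kg/day targets from note (2), and that protein supplement calories (4 kCal/g) count toward the caloric target; the function name and rounding are our own, and a nutritionist should verify any real prescription.

```python
# Sketch of the arithmetic behind the table (assumptions: Devine ideal body
# weight, 25 kCal/kg/day caloric goal, 1.5 g/kg/day protein goal, and protein
# supplement calories counted at 4 kCal/g toward the caloric target).

def hypocaloric_rx(height_inches, male, feed_kcal_per_ml, feed_protein_per_ml,
                   caloric_fraction=0.5):
    """Return (tube feed rate in ml/hr, supplemental protein in grams/day)."""
    ibw = (50.0 if male else 45.5) + 2.3 * (height_inches - 60)  # Devine, kg
    kcal_target = 25 * ibw * caloric_fraction   # hypocaloric target, kCal/day
    protein_goal = 1.5 * ibw                    # full protein goal, g/day
    # Solve for rate r (ml/hr):
    #   24*r*feed_kcal + 4*(protein_goal - 24*r*feed_protein) = kcal_target
    rate = (kcal_target - 4 * protein_goal) / (
        24 * (feed_kcal_per_ml - 4 * feed_protein_per_ml))
    supplement = protein_goal - 24 * rate * feed_protein_per_ml
    return round(rate), round(supplement)

# Worked example from the text: 68-inch man on Nutren 1.5
# (~1.5 kCal/ml, ~0.065 g protein/ml):
print(hypocaloric_rx(68, male=True, feed_kcal_per_ml=1.5,
                     feed_protein_per_ml=0.065))  # (15, 79)
```

Running the example reproduces the 15 ml/hour figure; the protein supplement rounds to 79 grams/day versus the table's 78 grams, a difference of rounding.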


Discussion

For decades it has been dogmatically accepted that nutritional support must provide 100% of the estimated caloric requirement at all times.  Although this may seem to be physiologic, it is not the body's natural response to inflammation.  Normally inflammation causes a reduction in appetite with negative caloric balance and weight loss.  Although this is not sustainable chronically, it is possible that having a negative caloric balance temporarily during acute illness could be beneficial (e.g. due to stimulation of autophagy and avoidance of aspiration).

The ideal caloric intake during acute illness remains unclear.  The EDEN trial shows that it is safe to provide 25% of the caloric goal for six days.  The PERMIT trial shows that targeting 50% of the caloric goal for two weeks was similarly safe.  Although neither trial showed improved mortality, there were some signals of benefit from hypocaloric nutrition (improved gastrointestinal tolerance, improved glycemic control, and more negative fluid balance).

It is possible to imagine that the ideal caloric administration could be dynamic over time (figure below).  Initially, when the patient is severely ill, it might be unwise or difficult to provide 100% of the estimated caloric requirement.  Over time, as the patient recovers, the amount of nutrition could be increased.  Acute illness involves characteristic evolution in hemodynamics, endocrine function, and fluid shifts, so it makes sense that nutritional requirements would be dynamic as well.


This evidence may not be strong enough to indicate that hypocaloric nutrition should be used for most ICU patients.  However, hypocaloric nutrition may be a reasonable strategy when managing an acutely ill patient with difficulty tolerating tube feeds (e.g. due to emesis and distension).  It is possible that the patient may simply not be ready to tolerate 100% caloric nutrition, so attempts to force this intake (e.g. with prokinetic agents) may be ill-conceived.  Rather than continuing to chase a target 100% caloric provision, it may be safer and more successful to temporarily target 50% caloric provision with 100% protein administration.  This could reduce the likelihood of distension, vomiting, or complete failure of enteral nutrition (with transition to parenteral nutrition). 



  • Nutrition has a variety of effects on the endocrine and immune systems.  Clinical evidence is required to determine the ideal nutritional target during acute illness, rather than assuming that 100% nutritional provision is ideal all the time. 
  • The PERMIT trial provides evidence that hypocaloric nutrition is safe among most acutely ill ICU patients for limited periods of time (e.g. 50% calorie provision for two weeks with administration of 100% of protein requirements). 
  • Currently it is unclear whether hypocaloric or full nutrition is superior upon admission to the ICU.  The ideal nutritional strategy likely varies between patients based on several variables (e.g. pre-existing malnutrition, difficulty tolerating feeds). 
  • Hypocaloric nutrition may be a reasonable short-term approach for many patients who are having difficulty tolerating 100% caloric administration.
  • For most ICU patients (e.g. without morbid obesity or renal failure), the following table may be used to quickly estimate a prescription for enteral nutrition which provides 100% of estimated protein requirements despite varying levels of calories. 


Same figure on its side (may be easier to read with a smartphone):

Additional reading
  • Schetz M et al.  Does artificial nutrition improve outcome of critical illness?  Critical Care 2013.
  • Wischmeyer PE.  The evolution of nutrition in critical care: how much, how soon?  Critical Care 2013.

Notes

(1) The term "trophic" feeding refers to very low levels of enteric feeding intended to prevent atrophy of the gut border.  This may also be referred to as "trickle" feeding.

(2) Note that 1.5 grams/kg/day protein and 25 kCal/kg are consistent with both ASPEN and ESPEN Guidelines (American & European nutritional societies)(Weijs 2013).  There seems to be a bit more consensus about protein requirements, with the 1.5 g/kg/day figure consistent with recommendations and most articles on the topic.  There are a wider variety of equations and methods used for determining total energy requirement. 


Proposal: Early ventilator weaning to HFNC in hypoxemic respiratory failure


Case example

A previously healthy 45-year-old man was transferred to the Genius General Hospital ICU for management of pneumonia.  He was intubated prior to transfer due to hypoxemia (details unavailable).  His chest radiograph showed dense right lower lobe consolidation, which was confirmed with ultrasonography.  He was treated with a regimen of dexamethasone, ceftriaxone, and azithromycin as discussed last week. 

On the second hospital day his oxygenation requirements were stable, requiring 55% FiO2 on 5 cm PEEP.  His chest radiograph showed stable consolidation of the right lower lobe without any progression of the pneumonia.  He was comfortable, calm, and awake on the ventilator.  

Based on the ICU weaning protocol, this patient did not qualify to undergo a spontaneous breathing trial because he was requiring >50% FiO2.  Nonetheless, a spontaneous breathing trial was performed on 55% FiO2.  He tolerated this well and was extubated to high-flow nasal cannula (HFNC) set to 60% FiO2 and 50 liters/minute flow.  Over the next few days HFNC was gradually weaned off.

Introduction

There is little data regarding how well a patient must oxygenate before considering extubation.  The most recent guidelines recommend delaying extubation until the patient can tolerate 40% FiO2 (Boles 2007).  Nonetheless, many modern weaning protocols and literature allow for extubation on 50% FiO2 (e.g. UPenn protocol).  With the availability of high-flow nasal cannula it might be possible to consider extubation in very carefully selected patients requiring >50% FiO2. 

Intubation-extubation paradox:  Failure of rigid extubation criteria.

Imagine what would happen if guidelines requiring FiO2 40% before extubation were followed for a patient with a baseline oxygen saturation 85-90% on six liters nasal cannula (which generates an inhaled oxygen concentration of ~44% FiO2).  If this patient were intubated for any reason (e.g. seizures, elective surgery), it would be impossible to ever reach extubation criteria!  Of course, in reality, if nothing changed with the patient's respiratory system, it should be possible to extubate the patient back to their chronic home oxygen prescription.  This illustrates that extubation criteria should not be applied rigidly, but instead may require adaptation to the clinical scenario. 
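The ~44% estimate reflects the common bedside rule of thumb that each liter per minute of nasal cannula flow adds roughly 4% FiO2 above room air.  A minimal sketch (an approximation only, since true FiO2 varies with the patient's minute ventilation and breathing pattern):

```python
# Rule of thumb only: each L/min of nasal cannula adds ~4% FiO2 to room air.
def estimated_fio2(nc_flow_lpm):
    return 0.21 + 0.04 * nc_flow_lpm

print(estimated_fio2(6))  # ~0.45, in line with the ~44% quoted above
```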

HFNC provides administration of higher levels of FiO2 than previously practical following extubation

Prior to HFNC, the highest fraction of inspired oxygen (FiO2) that could be provided after extubation for extended periods of time was around 50%, via a Venturi mask.  A non-rebreather facemask can increase the FiO2 to 60-70%, but this varies with the respiratory pattern and mask seal.  Noninvasive ventilation can provide 100% FiO2, but generally cannot be tolerated continuously for more than a couple of days.  Using these devices, it would be difficult to extubate a patient requiring >50% FiO2.

HFNC changes this by allowing powerful, longer-term support of oxygenation (e.g. >90% FiO2 and ~5 cm of PEEP-like effect).  This might allow extubation of a patient requiring >50% FiO2 while still leaving a margin of error in case the patient's oxygenation deteriorates.  HFNC is also more comfortable than BiPAP, with some patients remaining on it continuously for days to weeks.  

Hypoxemia alone is usually not the cause of post-extubation respiratory failure

Provided that the underlying lung disease is stable or improving, isolated hypoxemia rarely causes re-intubation.  The patient's oxygen requirements may be estimated reasonably well prior to extubation based on FiO2 and PEEP on the ventilator.  The clinician can subsequently provide a similar or slightly higher FiO2 after extubation.  Overall this may explain why most studies show no relationship between oxygenation and risk of re-intubation (Thille 2013).  (One notable exception to this is cardiogenic pulmonary edema, which may be exacerbated by the withdrawal of positive intrathoracic pressure.)  

The most common causes of post-extubation respiratory failure are respiratory muscle fatigue and inability to clear secretions from the airway.  Respiratory muscle fatigue may be predicted on the basis of the spontaneous breathing trial (e.g. rapid shallow breathing index, tidal volume, respiratory rate, required minute ventilation).  Ability to clear secretions from the airway is harder to assess, but may be predicted on the basis of multiple factors (cough strength, sputum volume, suctioning frequency, and mental status).  Although failure from these causes is often associated with hypoxemia, the hypoxemia is a secondary problem (e.g. due to mucus plugging or atelectasis).  

Symmetric nature of intubation and extubation


Although we may conceptualize intubation and extubation separately, they are two sides of the same coin.  For example, readiness for extubation is determined largely by the rapid shallow breathing index during a spontaneous breathing trial (respiratory rate divided by tidal volume).  Alternatively, for a patient with respiratory failure approaching intubation, one of the most helpful vital signs is a rising respiratory rate.  In both cases, a high respiratory rate usually reflects an imbalance between diaphragmatic strength and respiratory workload, signaling impending diaphragmatic failure.  
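As a worked example of the rapid shallow breathing index (numbers invented for illustration; the classic Yang-Tobin threshold predicting weaning failure is ~105 breaths/min/L):

```python
def rsbi(resp_rate_bpm, tidal_volume_ml):
    """Rapid shallow breathing index = respiratory rate (breaths/min) / tidal volume (L)."""
    return resp_rate_bpm / (tidal_volume_ml / 1000)

# Hypothetical spontaneous breathing trial: 25 breaths/min, 350 ml tidal volumes
print(round(rsbi(25, 350)))  # ~71 breaths/min/L, comfortably below ~105
```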

Similarly, the same techniques that may help avoid intubation can be used to facilitate early extubation.  The classic example would be BiPAP in the setting of COPD.  Early application of BiPAP in COPD reduces the intubation rate.  Selected COPD patients who require intubation can be aggressively extubated after 48 hours directly to BiPAP support, even if they do not meet the traditional criteria for extubation (Nava 1998).  Thus, BiPAP can be similarly helpful for either avoiding intubation or facilitating extubation (discussed further by the LITFL blog here). 

HFNC was shown in the FLORALI study to reduce the intubation rate in hypoxemic respiratory failure.  One may imagine that HFNC provides a greater level of respiratory support than low-flow oxygen.  Thus, for some patients with moderately severe illness, traditional oxygen support would be insufficient, leading to intubation (black arrow below).  However, HFNC provides sufficient support to avoid intubation entirely:


As shown below, a patient with more severe disease may require intubation even with HFNC support.  However, HFNC might still facilitate early extubation to a greater level of noninvasive respiratory support:



Evidence regarding the use of HFNC following extubation

Evidence up until August 2014 was previously explored in this post.  The most notable study was Maggiore 2014, which randomized 105 hypoxemic patients following extubation to HFNC vs. oxygen via Venturi mask.  Patients randomized to HFNC experienced less tachypnea, lower PaCO2, and lower reintubation rates (4% vs. 21%, p = 0.01).

Since then, Stephan 2015 explored the use of HFNC in patients extubated following cardiothoracic surgery.  BiPAP has been shown to reduce reintubation in this setting, so this trial randomized 830 patients to BiPAP vs. HFNC.  The two groups had nearly identical rates of reintubation.  It is unclear exactly how this extrapolates to non-surgical patients, but overall the study supports post-extubation HFNC.  

There is no evidence regarding extubation of a patient requiring >50% FiO2 to HFNC.  Thus, this post is merely a proposal. 

Conclusions

There is little evidence regarding how well a patient must be able to oxygenate before liberation from mechanical ventilation.  Most studies do not find a strong relationship between oxygenation and the risk of extubation failure. 

HFNC has the capacity to provide high levels of oxygen support in a comfortable and stable fashion.  This could provide a safety net, possibly allowing us to extubate patients who are a bit more hypoxemic than we might otherwise feel comfortable with.  However, until evidence is available, this must be considered very cautiously in highly selected patients.  A potential candidate might have purely hypoxemic respiratory failure with improving underlying disease and all other indicators strongly predictive of a successful extubation (e.g. excellent performance on spontaneous breathing trial, strong cough, normal mental status, no evidence of volume overload).


Image Credits: https://en.wikipedia.org/wiki/Ehime_Maru_and_USS_Greeneville_collision#/media/File:USSPittsburghBallastBlow_small.jpg

The tale of six blind physicians and the elephant


An elderly man was admitted to the ICU and evaluated by six blind physicians.

The blind cardiologist noted that the patient had a malignant pericardial effusion with tamponade.  She recommended an immediate pericardial drain followed by intra-pericardial chemotherapy.

The blind oncologist noted that the patient had stage IV lung cancer.  He recommended palliative chemotherapy and whole-brain radiation.

The blind neurologist noted that the patient had depressed mental status due to brain metastases with elevated intracranial pressure and impending herniation.  She recommended initiation of dexamethasone and hypertonic saline, with hourly neurologic examinations. 

The blind hematologist noted that the patient had a deep vein thrombosis, likely due to malignancy.  He recommended initiation of a heparin infusion that could be stopped if there were hemorrhage into the pericardium or brain. 

The blind nephrologist noted that the patient had renal failure with a blood pressure insufficient to tolerate hemodialysis.  He recommended placement of a dialysis catheter with initiation of continuous renal replacement therapy.

The blind intensivist noted that the patient was in shock and not protecting his airway.  She prepared for immediate intubation and vasopressor support.

A nurse with the power of sight walked by and was surprised to see a dying man being harassed by six physicians.  She walked into the room and gently said, "It seems to me that there is an elephant in this room.  This man is dying and nothing can stop that."  The physicians were taken aback, but recognized the truth of her words. 

Comment

Critical care medicine is typically reductionist, analytical, and data-driven.  Each patient is dissected into organ systems and problems.  Enormous amounts of data are involved - vital signs, labs, EKGs, CT scans, medication lists, microbiology, and pathology.  For each problem there is a solution, for each abnormality there is a correction, for each symptom there is a differential diagnosis. 

This approach works well when providing curative therapy.  The problems are tackled, the details are attended to, and the patient gets better.  For a young patient with septic shock, it doesn't really matter what their hopes and dreams are.  What matters is immediate, definitive, technically proficient care that saves lives.  I've provided curative care to many such patients without understanding who they were as people.  That's OK.  Not ideal, but OK. 

However, occasionally this approach is applied to patients with unsolvable problems who are in need of palliation.  In this situation, things spiral out of control.  Problems multiply, rather than being resolved.  The goal of caring for the patient is rapidly buried beneath piles of data and technical details.  Treatments lead to complications, as we become increasingly lost.  The harder we try to make the patient better, the sicker the patient becomes. 

Determining which patients benefit from a curative approach, a palliative approach, or an intermediate approach is one of the most confounding aspects of critical care.  There are few easy answers, but rather we are often left blindly groping for insight in the darkness.  Engaging those around us can help - patients, families, friends, social workers, nurses, ethicists.  Alone we may be blind, but together insight is possible. 

*****

Related news:  No benefit from chemotherapy at end of life, by Charles Bankhead, Medpage Today. 

Don't worry, next week we will return to a coldly reductionist, analytical approach with a two-week series about dominating hyponatremia. 


Emergent treatment of hyponatremia or elevated ICP with bicarb ampules

Introductory case

A young 70-kg man was transferred to the Genius General ICU for management of stupor.  He had been diagnosed with aortic valve endocarditis due to heroin abuse two weeks earlier, but left the hospital against medical advice.  Shortly after admission to Genius General, the lab called with a critical sodium value of 122 mM.  Review of records from the outside hospital showed that his sodium had been 124 mM a few hours earlier.  So his hyponatremia was real, and his sodium was falling rapidly.  He was immediately treated with two 50-ml ampules of 8.4% sodium bicarbonate.  Repeat sodium showed an increase to 125 mM: 


Over time his mental status and sodium both normalized.

Introduction
"Prompt infusion of hypertonic saline may save lives and preparing a 3% hypertonic saline infusion takes time.  In addition, errors may occur from having to calculate the required amount of sodium chloride in an emergency."
 - European hyponatremia guidelines 2014
When hypertonic therapy is needed, it is often needed immediately.  A patient with herniation or hyponatremic seizures needs hypertonic therapy now, not in ten minutes when it arrives from the pharmacy.  For example, I once ordered a head CT and hypertonic saline for a patient with suspected herniation, but the hypertonic saline arrived in the ICU after the patient had already left for the CT scanner.  Hypertonic sodium bicarbonate may provide a solution to this logistical problem.

Understanding 8.4% sodium bicarbonate

The osmolarity of 8.4% sodium bicarbonate is 2000 mOsm/liter, equivalent to the osmolarity of 5.8% NaCl.  Thus, 8.4% bicarbonate used for osmotherapy may be conceptualized as "6% saline."  This makes it twice as powerful as the traditional hypertonic agent, 3% NaCl.  For example, instead of bolusing with 100 ml of 3% NaCl, you could bolus with 50 ml of 8.4% bicarbonate (one ampule).
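This equivalence can be checked from molecular weights, as in the sketch below (both salts dissociate into two osmoles per molecule; numbers rounded):

```python
# Osmolarity = (g/L divided by g/mol) * 1000 mmol * 2 osmoles per molecule
bicarb_84 = (84 / 84.0) * 1000 * 2    # 8.4% NaHCO3 -> ~2000 mOsm/L
nacl_58   = (58 / 58.4) * 1000 * 2    # 5.8% NaCl   -> ~1986 mOsm/L
nacl_3    = (30 / 58.4) * 1000 * 2    # 3% NaCl     -> ~1027 mOsm/L (about half)
print(round(bicarb_84), round(nacl_58), round(nacl_3))
```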

Many trainees feel comfortable bolusing two amps of bicarb, but would be afraid to bolus 200 ml of 3% NaCl.  This is illogical, since both therapies provide essentially the same amount of osmotherapy.  Similar inconsistencies exist in the literature as well.

Dose for symptomatic hyponatremia

Guidelines and review articles agree that for a patient with hyponatremia and severe symptoms (e.g. seizures or coma), an increase in sodium of 5 mM should be adequate to relieve symptoms and avoid danger.  However, the amount of hypertonic therapy recommended by many sources is inadequate.  For example, a 2015 review article in the New England Journal contains the following table: 


Although 100 ml of 3% saline may seem like a lot, it isn't.  For example, if 100 ml of 3% NaCl were given to the patient at the beginning of this post, it would increase his sodium by only 0.9 mM according to the Adrogue-Madias formula (MedCalc). 
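A minimal sketch of that calculation using the standard Adrogue-Madias formula (total body water estimated as 0.6 × weight for men; linear scaling is a reasonable approximation for small volumes):

```python
def adrogue_madias_delta(na_serum, na_infusate, tbw_l, volume_l):
    """Adrogue-Madias: each liter infused changes serum Na by
    (Na_infusate - Na_serum) / (TBW + 1); scaled here for a partial liter."""
    return (na_infusate - na_serum) / (tbw_l + 1) * volume_l

# The 70-kg patient above (Na 122 mM, TBW ~ 0.6 * 70 = 42 L)
# given 100 ml of 3% NaCl (513 mM sodium):
print(round(adrogue_madias_delta(122, 513, 42, 0.1), 1))  # ~0.9 mM
```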


The European 2014 guidelines recommend treating emergent hyponatremia as shown above.  Note that they recommend checking a sodium level simultaneously with the second dose of 3% NaCl, such that every patient would receive a minimum of 4 ml/kg of 3% NaCl.  Based on the Adrogue-Madias equation, 4 ml/kg of 3% NaCl should increase the serum sodium by ~3 mM.  The guidelines then recommend additional hypertonic therapy until the sodium increases by 5 mM.  This is more aggressive than most articles recommend (e.g. a 70-kg patient would receive a minimum of 280 ml of 3% NaCl).

In practice, it is logistically tricky to provide two doses of 2 ml/kg 3% NaCl and measure the sodium simultaneously with infusion of the second dose (especially if other events are occurring, such as status epilepticus).  It may be simpler to provide the patient with a single dose of 4 ml/kg 3% NaCl and check the sodium afterwards.

Providing a larger dose (i.e. 4 ml/kg of 3% NaCl) before repeating a sodium level allows more precise determination of the effect of the bolus.  For example, suppose you give 100 ml of 3% NaCl and the sodium increases from 115 mM to 116 mM.  Due to rounding, a lab value of "115" could represent anything between 114.5 and 115.4 mM, and "116" could represent anything from 115.5 to 116.4 mM.  Therefore, an increase from "115" to "116" could represent a true increase of anywhere between 0.1 and 1.9 mM.  This can be misleading. 

Converting 4 ml/kg of 3% NaCl to sodium bicarbonate suggests that the initial dose should be 2 ml/kg of 8.4% sodium bicarbonate (e.g. a 70-kg patient would receive 140 ml of bicarbonate, or nearly three 50-ml ampules).  For slightly less dire situations, this could be rounded down to 100 ml (two 50-ml ampules; equivalent to 200 ml of 3% saline). 

Evidence regarding the increase in sodium following 8.4% sodium bicarbonate

Gutierrez 1991 studied the effect of 1 ml/kg boluses of 8.4% sodium bicarbonate among eight patients with renal failure and hyperkalemia.  On average, this increased the serum sodium by 1 mM.

Kim 1996 studied the effect of 120 ml of 8.4% bicarbonate among eight patients with end-stage renal disease weighing 55-65 kg.  On average, this ~2 ml/kg dose caused the sodium to increase by 2 mM. 

Bourdeaux 2010 studied the effect of 85-ml boluses of 8.4% sodium bicarbonate among ten episodes of elevated intracranial pressure.  On average this increased the sodium by 1.6 mM.  Given that 85 ml is probably a bit over 1 ml/kg for most patients in this study, this would suggest that a 2 ml/kg bolus of 8.4% sodium bicarbonate should increase the sodium by about 2-3 mM.

Overall, these data support the concept that a 2 ml/kg bolus of 8.4% sodium bicarbonate would increase the serum sodium by around 2-3 mM.  This would be a reasonable first step for the management of severe symptomatic hyponatremia. 

Dose for ICP elevation

Recent trends in neurocritical care are moving away from mannitol and towards hypertonic saline for osmotherapy of elevated intracranial pressure (ICP).  Hypertonic saline might be more effective.  Furthermore, hypertonic saline doesn't cause diuresis and is more straightforward to monitor (serum sodium is easier to interpret and trend than serum osmolarity).

The ideal dose of hypertonic saline is unclear, with substantial variation between various studies.  Typically sequential boluses are used with titration to effect.  Below are commonly used doses of 3%, 7.5%, and 23.4% NaCl (Ropper 2012, Bourdeaux 2010, Ennis 2011).  To facilitate comparison, these doses have been converted into equi-osmolar doses of 3% NaCl and 8.4% sodium bicarbonate. 


These data suggest that 80-120 ml of 8.4% bicarbonate may be a reasonable dose for management of intracranial hypertension (i.e. about two 50-ml ampules). 

Evidence regarding 8.4% sodium bicarbonate use in elevated intracranial pressure 

Bourdeaux 2010 performed a prospective observational study of the effect of 85 ml of sodium bicarbonate given over 30 minutes during ten episodes of elevated ICP among seven patients with traumatic brain injury.  The average ICP fell from 28 mmHg to 10 mmHg (figure below).  There was no statistically significant change in pH.


Bourdeaux 2011 performed an RCT comparing 100 ml of 5% NaCl vs. 85 ml of 8.4% sodium bicarbonate (equimolar doses).  Twenty episodes of elevated ICP were studied among eleven patients with traumatic brain injury.  There was no difference in the fall in ICP during the first 60 minutes following either treatment.  However, after 150 minutes the mean ICP was higher in the hypertonic saline group, with two patients in the saline group needing repeat dosing of hypertonic therapy (figure below).



Safety & Contraindications

Safety of sodium bicarbonate

8.4% sodium bicarbonate is a familiar drug which is reasonably safe.  Ideally it should be given via a central vein, but in emergencies it is frequently given via a peripheral vein.  Although previously thought to reduce potassium, hypertonic bicarbonate has little effect on potassium (explored previously here). 

Bicarbonate does have an alkalinizing effect.  For example, a dose of 2 ml/kg of 8.4% sodium bicarbonate may increase the serum bicarbonate concentration by ~5 mM (Kim 1996, Kim 1997).  For most patients this will leave the serum bicarbonate well within a safe range.  Patients receiving repeated therapy with hypertonic saline often develop a dilutional non-anion-gap metabolic acidosis, so the intermittent use of hypertonic bicarbonate could help correct this.  In a patient with significant metabolic or respiratory alkalosis, bicarbonate would be contraindicated. 
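One way to rationalize the ~5 mM figure is a rough distribution calculation (a sketch only, assuming a bicarbonate distribution space of ~50% of body weight):

```python
# 8.4% NaHCO3 contains 1 mmol of bicarbonate per ml
dose_ml_per_kg = 2
dose_mmol_per_kg = dose_ml_per_kg * 1.0
bicarb_space_l_per_kg = 0.5            # assumed distribution volume
rise_mm = dose_mmol_per_kg / bicarb_space_l_per_kg
print(rise_mm)  # ~4 mM, in the same range as the ~5 mM observed by Kim
```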

Over-correction of hyponatremia

Over-correction of hyponatremia is common, but this is rarely a direct effect of the infused solution.  Instead, over-correction is usually due to excessive excretion of free water by the kidneys.  The physiology, prevention, and management of sodium over-correction will be explored in detail next week.




  • 8.4% sodium bicarbonate has about the same osmolarity as 6% NaCl, making it about twice as powerful as 3% NaCl.
  • For severe symptomatic hyponatremia (e.g. seizures or coma), initial treatment with 2 ml/kg of 8.4% sodium bicarbonate is reasonable.  For less dire indications, ~1.5 ml/kg of 8.4% sodium bicarbonate may be used initially (which will often be about 100 ml, or two 50-ml ampules).
  • For elevated intracranial pressure, 80-120 ml of 8.4% sodium bicarbonate is a reasonable initial dose (e.g. two 50-ml ampules).  



Key hyponatremia reference:  In 2014 an epic evidence-based guideline on hyponatremia was produced by a consortium of the European Society of Intensive Care Medicine (ESICM), the European Society of Endocrinology, and the European Renal Association.  Free full-text here.

Stay tuned, this is the first of a three-part series about hyponatremia.  Next week we will discuss preventing over-correction of the sodium.  The third post will discuss extremely unconventional approaches to hyponatremia. 


Image Credits: https://en.wikipedia.org/wiki/Baking_powder

Taking control of severe hyponatremia with DDAVP


Introduction with a case

Imagine an elderly patient presenting with hypovolemic hyponatremia (sodium of 115 mM) and moderate confusion.  How would you treat this patient?

The typical approach might be a slow infusion of 3% sodium chloride.  The presence of neurologic symptoms supports the use of hypertonic saline.  However, patients with hypovolemic hyponatremia are at high risk for over-correcting their sodium.  A common compromise between these two concerns would be to use hypertonic saline, but at a low infusion rate.

This approach has two seemingly contradictory flaws (figure below).  First, it is initially too conservative.  Moderately symptomatic hyponatremia is potentially dangerous, especially if the sodium continues to fall.  For example, European guidelines recommend a single bolus of 2 ml/kg of 3% saline, perhaps enough to increase the sodium by 1-2 mM.  Second, slow initial therapy still leaves the patient at high risk of over-correction (explained further below). 


This post explores an alternative approach which may allow for more aggressive initial treatment while simultaneously avoiding subsequent over-correction. 

Physiology of sodium over-correction

The Adrogue-Madias equation is typically used to predict the change in sodium in response to an IV fluid (e.g. it is built into MDCalc).  This is a simple formula based on taking a weighted average of the sodium concentration of the infused fluid and the sodium concentration of the total body water.  The same principle could be used to determine the final sodium concentration if two solutions with different sodium concentrations were mixed in a laboratory:
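A sketch of that mixing calculation (a simple weighted average; the names and numbers are illustrative, and the result agrees with the per-liter Adrogue-Madias delta of (Na_infusate − Na_serum)/(TBW + 1)):

```python
def mixed_na(na_body, tbw_l, na_infusate, vol_l):
    """Final sodium after an infusate equilibrates with total body water."""
    return (na_body * tbw_l + na_infusate * vol_l) / (tbw_l + vol_l)

# 1 L of 3% NaCl (513 mM) mixed into a patient with Na 120 mM and TBW 42 L:
print(round(mixed_na(120, 42, 513, 1.0), 1))  # ~129.1 mM, a rise of ~9 mM
```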


The Adrogue-Madias formula works well for predicting immediate changes in sodium concentration (e.g. after bolusing fluid).  The weakness of the formula is that it doesn't take the kidneys into account.  Thus, over time the Adrogue-Madias formula loses predictive ability, because it is often unpredictable how the kidneys will handle water. 


Over-correction of hypovolemic hyponatremia is a common example of failure of the Adrogue-Madias formula.  The physiology of hypovolemic hyponatremia is shown below.  In response to cerebral hypoperfusion, the brain secretes vasopressin (a.k.a. anti-diuretic hormone).  Vasopressin has vasopressor effects and also causes retention of free water by the kidneys, both in efforts to support perfusion.  Free water retention causes hyponatremia.  This is not a "mistake," but rather evolutionary wisdom favoring perfusion over normonatremia. 


If a patient with hypovolemic hyponatremia is volume resuscitated, at a certain point perfusion improves and this shuts off vasopressin (figures below).  Without vasopressin, the kidneys rapidly excrete water, causing a dangerously fast normalization of the serum sodium. 




Although this example focuses on hypovolemic hyponatremia, overcorrection will also occur after treatment of any reversible cause of hyponatremia (e.g. psychogenic polydipsia, drug-induced hyponatremia, etc.).

Managing sodium over-correction: DDAVP vs. D5W

There are two approaches to managing water over-excretion.  One is to attempt to replace the free water excreted by the kidneys, for example with intravenous 5% dextrose (D5W).  This requires careful attention to urine output and serum sodium, with ongoing titration of the D5W.  Wrestling with normal kidneys is difficult.  Usually at some point something exciting happens in the ICU, attention is diverted, and before you know it the sodium is too high.  High rates of D5W may also induce hyperglycemia.  Others have reported difficulty with this strategy (Perianayagam 2008; Gharaibeh 2015).


A more powerful approach to excessive water excretion is to provide desmopressin (DDAVP, 2 micrograms IV q8hr; Sood 2013).  DDAVP stimulates V2 vasopressin receptors in the kidney, causing renal retention of water (figure above).  This eliminates unpredictable excretion of water by the kidneys:


With blockade of renal water excretion, the Adrogue-Madias equation will be more accurate.  This allows control of the sodium based on fluid administration:


For example, if you wish to stop the rise of sodium, DDAVP may be given and fluid intake stopped.  This will halt intake and output of free water, so the sodium should remain stable.  This approach is easier to achieve than titrating a D5W infusion:  just order the DDAVP, stop fluid inputs, and you're done.  If the patient is neglected for a few hours, the sodium will probably be fine. 

Rescue DDAVP strategy

The risk of osmotic demyelination syndrome depends on the average change in sodium over time, so if the sodium over-corrects, this can still be remedied by re-lowering the sodium to its original target.  Combining DDAVP with carefully calculated doses of D5W may achieve this.

This is obviously not the preferred strategy for managing sodium.  However, it is important to recognize that sodium over-correction is not an unfixable problem.  Even if the patient seems OK neurologically, it is probably safest to lower the sodium.  By the time symptoms of osmotic demyelination syndrome emerge, the optimal window for intervention has passed. 

Reactive DDAVP strategy

Consider a patient admitted with chronic, asymptomatic hyponatremia due to hypovolemia.  Nothing dramatic must be done initially.  Fluid resuscitation may be undertaken with careful monitoring of the serum sodium concentration.  At some point, vasopressin levels will fall and the sodium will start climbing rapidly.  Once the sodium has increased a fair amount (perhaps ~8 mM) or urine output accelerates, DDAVP and fluid restriction may be initiated to stop the rise in sodium.  Once the DDAVP is stopped, the sodium will resume rising:


The physiology underlying this strategy is supported by an observational study by Rafat 2014, which showed that DDAVP administration decreased the urine output and increased the urine tonicity, halting the rise in sodium over time:  


The weakness of this strategy is that it requires constant vigilance to detect over-correction, with intervention at just the right moment.  This is not foolproof.  For example, in the Rafat series, about half of the patients still over-corrected their sodium.

Proactive DDAVP strategy

The proactive DDAVP strategy represents the most definitive approach to controlling sodium.  This is performed as follows:
  • DDAVP (2 micrograms IV q8hr) is started immediately and continued until the sodium is close to normal.
  • Sodium is corrected by infusing hypertonic solutions, primarily 3% saline.  Of course, hypertonic bicarbonate could also be used, as discussed last week.  For a patient requiring volume resuscitation, a large volume of normal saline could be used as well.  The key point is that the sodium is increased by a direct effect of the infused solutions.  This differs from approaches based on treatment of the underlying problem and waiting for the kidneys to excrete free water. 
  • Oral fluid intake must be restricted while on DDAVP. 
  • Potassium supplementation should be taken into account, as potassium is osmotically equivalent to sodium (e.g. a 40 mEq KCl tablet is roughly equivalent to ~80 ml of 3% NaCl). 
  • Medications formulated in D5W should be avoided if possible, or otherwise taken into account (e.g. 100 ml of D5W will negate the effect of ~30 ml of 3% NaCl; both equivalences are sketched after this list).
  • If volume overload occurs, this may be managed with furosemide. 
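For those who like to see the bookkeeping, a minimal sketch of these equivalences (illustrative only; 3% NaCl contains ~513 mEq of sodium per liter):

```python
MEQ_NA_PER_ML_3PCT = 0.513   # 3% NaCl: 30 g/L / 58.4 g/mol ~= 513 mEq/L

def as_3pct_ml(meq_cation):
    """Express a sodium or potassium load (mEq) as an equivalent volume of 3% NaCl."""
    return meq_cation / MEQ_NA_PER_ML_3PCT

print(round(as_3pct_ml(40)))  # a 40 mEq KCl tablet ~= ~80 ml of 3% NaCl
# 100 ml of D5W is pure free water; by Adrogue-Madias it offsets roughly the
# sodium effect of ~30 ml of 3% NaCl in a typical hyponatremic patient.
```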

An example of how this strategy would work for a patient with severe symptomatic hyponatremia is shown below.  DDAVP is started immediately to block renal free water excretion.  Boluses of hypertonic therapy are provided initially to improve symptoms and raise the sodium by ~5 mM.  Given that the target rise in sodium over the first day is ~6 mM, fluid intake is then stopped for the remainder of the day, keeping the sodium stable.  Subsequently, an infusion of hypertonic saline is started to gradually increase the sodium to normal. 



As shown below, a proactive DDAVP approach has two advantages in symptomatic hyponatremia compared to less aggressive management.  First, immediately increasing the sodium will rapidly bring the sodium to a safe level and relieve symptoms.  Second, proactive DDAVP prevents endogenous over-correction.


For patients who have asymptomatic hyponatremia, a proactive DDAVP strategy would consist of simultaneously starting DDAVP and an infusion of 3% saline:


Contraindications to proactive DDAVP

Perhaps the most important contraindication to DDAVP is inability to control oral fluid intake (e.g. due to psychogenic polydipsia).  If DDAVP is given and the patient continues to have significant fluid intake, this will exacerbate the hyponatremia. 

Patients with pure hypervolemic hyponatremia (e.g. heart failure, cirrhosis) will not benefit from this approach.  These patients usually have mild hyponatremia and rarely over-correct their sodium, so there is little rationale for DDAVP.  Additionally, hypertonic saline therapy would worsen volume overload.  However, for a patient with multifactorial hyponatremia (e.g. profound hyponatremia due to mild heart failure, beer potomania, and thiazide diuretics), a proactive DDAVP strategy may still be considered along with furosemide diuresis. 

For patients with SIADH due to a chronic stimulus (e.g. malignancy), there is little benefit from administering DDAVP.  However, DDAVP won't hurt either (it will probably have no effect).  For patients with SIADH due to reversible factors (e.g. nausea, medications), DDAVP may be beneficial because such patients may over-correct after the cause of SIADH is removed.  Overall, a proactive DDAVP strategy should work fine for any patient with SIADH.  

Evidence supporting the proactive DDAVP strategy

Sood 2013 reported a series of 24 patients admitted with sodium <120 mM who were treated with a combination of DDAVP and hypertonic saline infusion.  The authors targeted a rise in sodium of <6 mM within the first 24 hours, and achieved an average increase of 5.8 mM.  None of the patients had excessive correction.  Overall, the Adrogue-Madias equation appeared to predict changes in sodium reasonably well:



Although this is an uncontrolled case series, it does support the efficacy and safety of this approach.  The only noted adverse event was one patient who developed pulmonary edema requiring diuresis. 

A recent systematic review of DDAVP use concluded that the proactive strategy was associated with the lowest incidence of over-correction.  However, this evidence was mostly derived from the Sood study (MacMillan 2015). 

Vaptans = The opposite of DDAVP


This physiology illustrates the danger of vaptans (e.g. conivaptan, tolvaptan) in hyponatremia.  Vaptans inhibit the vasopressin receptor, causing renal excretion of free water: 


Rapid water excretion may cause sodium over-correction.  Vaptans may cause patients to transition from hyponatremia to hypernatremia with subsequent osmotic demyelination syndrome (Malhotra 2014).  The ability to inadvertently push patients into a hypernatremic state is uniquely dangerous compared to most mechanisms of sodium over-correction (which stop once the sodium normalizes).  Thus, the European 2014 consensus guidelines recommend against using vaptans.

An expert panel funded by the manufacturer of tolvaptan recommended that vaptans could be used in some situations.  Surprisingly, a recent NEJM review article supported the use of vaptans, favoring this expert panel over the European 2014 consensus guidelines.  The review acknowledges that there are no RCTs comparing vaptans to other therapies for hyponatremia.

According to this review, to prevent over-correction the urine output must be replaced with intravenous D5W after the sodium has increased to the target level.  This is exactly the opposite of using DDAVP:  vaptans induce uncontrolled renal water excretion, which must then be replaced.  As discussed above, trying to keep up with renal free water excretion can be difficult.

Conclusions

Perhaps the greatest challenge of managing severe hyponatremia is avoiding sodium over-correction, which may cause permanent neurologic disability.  Understanding the physiology of sodium over-correction allows us to anticipate this, but it is still unclear when it will occur.  DDAVP appears to be the most effective approach to reversing, arresting, or preventing sodium over-correction.  Unfortunately there is little evidence regarding exactly how we should use this.  For patients at the highest risk of osmotic demyelination syndrome, it may be safest to start DDAVP proactively in order to avoid over-correction entirely. 


  • Over-correction of sodium is usually due to recovery of normal renal physiology with excretion of water.
  • DDAVP blocks renal excretion of water, allowing the sodium to be predictably manipulated using the Adrogue-Madias equation.   This can be accomplished using three strategies:
  • Rescue DDAVP strategy:  If the sodium has already over-corrected, DDAVP may be combined with D5W to decrease the sodium.  
  • Reactive DDAVP strategy:  If the sodium is rising at a dangerous rate, this may be temporarily halted with a combination of DDAVP and fluid restriction.  This stops free water input and excretion, causing the sodium to be relatively stable over time.
  • Proactive DDAVP strategy:  For patients at high risk for osmotic demyelination syndrome, it may be safest to start DDAVP immediately.  With this strategy, DDAVP prevents water excretion from the kidneys, so hyponatremia must be treated directly by infusing hypertonic fluids. 
  • Conivaptan and tolvaptan may cause uncontrolled water excretion and over-correction of the sodium.  Their use is not recommended.  

Stay tuned:  This is the second part of a three-part series on hyponatremia.  Next week we will proceed further down the rabbit hole to discuss extremely unconventional treatments for hyponatremia.

Image credits
https://en.wikipedia.org/wiki/Human_brain#/media/File:Sobo_1909_624.png
http://wiki.flightgear.org/File:Lightning-cockpit.jpg
https://en.wikipedia.org/wiki/Autothrottle#/media/File:Thrust_levers_of_an_Airbus_A320.jpg

Unconventional therapies for hyponatremia: Thinking outside the collecting duct



Case: An unusual ICU referral

Some years ago at Genius General Hospital, the ICU was asked to accept a patient from the medicine ward with cirrhosis, confusion, and hyponatremia (Na 125 mM) for hypertonic saline therapy.  There was concern that the patient's confusion was due to his hyponatremia. 

Chart review showed that his hyponatremia was chronic, and not much worse than his baseline.  Additionally, although he was being prescribed lactulose for hepatic encephalopathy, he was refusing most of the doses.  The intensivist's impression was that the patient's confusion was most likely due to hepatic encephalopathy.  It was recommended that the patient's lactulose dose be increased as simultaneous management for both his hepatic encephalopathy and hyponatremia.

With effective lactulose therapy, the patient's sodium increased gradually and his confusion lifted.  He did not require ICU transfer.  It is unclear whether his confusion resolved due to treatment of hepatic encephalopathy or hyponatremia.  Indeed, hyponatremia may be a component of the pathogenesis of hepatic encephalopathy, so these disorders are probably intertwined (Iwasa 2015). 

Introduction

The last two posts on hyponatremia have focused on the use of hypertonic sodium chloride and sodium bicarbonate.  This might create the incorrect impression that hyponatremia is due to a sodium deficiency.  Instead, the core physiologic abnormality of hyponatremia is generally water excess.   As discussed last week, the renal retention of water is typically what drives hyponatremia. 

This distinction becomes important when managing hypervolemic hyponatremia (mostly patients with heart failure or cirrhosis).  These patients have an excess of both sodium and water, with a disproportionate excess of water.  This occurs due to reduced cerebral perfusion causing excessive secretion of vasopressin and renal water retention:



Aside from emergencies, hypertonic saline works poorly in hypervolemic hyponatremia, because it will worsen volume overload.  Fluid restriction and furosemide may be better options.  However, these may be poorly tolerated or ineffective.

Recent research has focused on vasopressin inhibitors in this situation ("vaptans," e.g. conivaptan and tolvaptan).  By blocking vasopressin, these medications may facilitate renal excretion of water (figure below).  However, as explored previously, vaptans pose a risk of causing excessive water loss in the urine with over-correction of the sodium.  Additionally, due to potential liver toxicity, the FDA has recommended that tolvaptan be avoided in patients with liver disease and never be used for longer than 30 days.  


This post will explore the potential use of osmotic laxatives and osmotic diuretics for the management of non-emergent hypervolemic hyponatremia.  This is usually mild and chronic, so the treatment goal is often a very gradual increase in sodium (e.g. ~3 mM per day).  These therapies might also be useful for SIADH.

Dosing:  Targeting the amount of electrolyte-free water loss

The change in serum sodium per liter of electrolyte-free water excreted can be estimated as roughly the serum sodium concentration divided by the total body water ([Na]serum/TBW).  This usually yields an increase of about 2-4 mM in sodium concentration per liter of water output; for example, with a serum sodium of 125 mM and a TBW of 42 liters, each liter of water lost raises the sodium by ~3 mM.  Thus, a net loss of ~0.75 liter/day of free water may be a reasonable initial target for most patients.  Note that fluid intake must be restricted.

Osmotic laxatives:  Lactulose

Basic physiology of osmotic laxatives

The colon is permeable to water, so the osmolarity of stool equals that of blood (~300 mOsm/L).  Therefore, ingestion of any nonabsorbable substance with an osmolarity of >>300 mOsm/L will function as an osmotic laxative.  Water will be drawn into the gut until the bowel contents reach an osmolarity of ~300 mOsm/L.  Ultimately, the bowel contents are excreted.  The entire process results in the net excretion of water.
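To make this concrete, a sketch of the water obligated by a lactulose load (lactulose MW ≈ 342 g/mol; this ignores partial bacterial metabolism of the dose):

```python
lactulose_g_per_day = 125
mw_g_per_mol = 342
stool_osm = 300                                        # mOsm/L, isotonic with blood
osmoles = lactulose_g_per_day / mw_g_per_mol * 1000    # ~365 mOsm/day
stool_liters = osmoles / stool_osm                     # ~1.2 L/day of stool water
print(round(stool_liters, 1))
```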

Dosing of lactulose

Even while taking lactulose, there are still some electrolytes within the stool.  Hammer 1989 found that with administration of 125 grams/day of lactulose, the average sum of stool sodium and potassium concentrations was 50 mEq/L.  Since electrolyte-free water is calculated relative to the serum sodium (1 − [Na+K]stool/[Na]serum), only ~66% of the stool volume is electrolyte-free water at this lactulose dose.  This is a rough number, because the fraction of free water in the stool increases with increasing lactulose dose.  Overall this suggests targeting a slightly higher stool output, perhaps 1-1.5 liters/day.  

Lactulose dosing is a bit tricky, because colonic bacteria metabolize some lactulose.  Therefore, there is a threshold dose below which there is little laxative effect.  Studies of healthy volunteers suggest that a dose around 125 grams/day corresponds to a stool output of about 1.3 liters/day (Hammer 1989).  Given that most lactulose solutions contain 0.66 grams/ml, this corresponds to roughly 45 ml four times daily.  Since individual responses vary, it may be safest to start with a lower dose (e.g. 30 ml four times daily) and titrate upward.  These doses are within the range used for acute hepatic encephalopathy.
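The milliliter conversion, assuming the 0.66 g/ml concentration quoted above:

```python
grams_per_day = 125
g_per_ml = 0.66
ml_per_day = grams_per_day / g_per_ml   # ~190 ml/day
print(round(ml_per_day / 4))            # ~47 ml per dose, i.e. roughly 45 ml four times daily
```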

Monitoring osmotic diarrhea

Lactulose therapy may be monitored on the basis of stool output, patient weight, and serum sodium concentration.  If the sodium increases greater than the desired amount, this may be corrected by decreasing the lactulose dose and administering water (either enterally or intravenous D5W). 

Efficacy of lactulose to increase serum sodium

Practitioners experienced in using lactulose in critically ill patients are familiar with the gradual increase in sodium observed when lactulose is administered to patients with limited fluid intake (e.g. NPO or intubated).  Such patients often require substantial replacement of free water to prevent hypernatremia.  Several reports describe hypernatremia when these trends in sodium were not attended to (e.g. Lukens 2011, Warren 1980, Nelson 1983).  In particular, Kaupke 1977 reported a case wherein lactulose dosed at 50 ml four times daily caused an average increase in the sodium concentration of 4 mEq/L/day, consistent with the dosing considerations discussed above.  The ability of lactulose to cause hypernatremia is widely acknowledged in the literature. 

Safety of lactulose

Lactulose appears to be very safe.  It is available over the counter in many countries.  Side effects consist predominantly of bloating and flatulence.  Lactulose has a long track record of safety in the ICU, even when used aggressively in very ill patients with liver disease.

Use of lactulose in specific situations

Cirrhosis:  Lactulose may be an ideal agent for patients with cirrhosis, mild hypervolemic hyponatremia, and altered mental status.  As in the case above, lactulose may simultaneously treat hyponatremia and hepatic encephalopathy.

Heart failure:  There is no direct evidence regarding the use of lactulose in heart failure.  Prior to using lactulose, volume status should be assessed.  Hyponatremia in heart failure is typically an indication of poor systemic perfusion.  For patients who are not intravascularly volume overloaded, volume loss due to lactulose could further worsen hypoperfusion.  However, for patients who are intravascularly volume overloaded, lactulose is a logical consideration.  

Limited resources:  Lactulose could be useful in situations where there were limited resources (e.g. treatment of a patient with SIADH without the capacity to infuse hypertonic saline and check sodium levels frequently).

Urea & mannitol

Urea is an unusual substance because it is absorbed by the GI tract and subsequently excreted extensively in the urine.  When excreted in the urine, it pulls water along with it, acting as an osmotic diuretic.  It is unique in this respect: other osmotic diuretics generally cannot be absorbed via the oral route.

The ability to promote electrolyte-free excretion of water in the urine has made urea a guideline-recommended therapy for SIADH.  Urea also appears to be effective for hypervolemic hyponatremia (Decaux 2014).  Unfortunately, urea is not available in the US for unclear reasons.  Some authors suggest that urea is too bitter for the delicate "North American palate," but this is nonsense (Vandergheynst 2015).  Perhaps marketing urea is simply unprofitable.

The closest drug to urea which is available in the USA might be intravenous mannitol.   Mannitol functions as an osmotic diuretic, causing excretion of electrolyte-free water in the urine.  However, the free water excretion induced by mannitol is variable (Keyrouz 2008).  The combination of mannitol and furosemide promotes more consistent water excretion, but further evidence is needed (Pollay 1983,  Porzio 2000).  


  • Patients with cirrhosis or heart failure often have mild, subacute or chronic hyponatremia which may be difficult to treat (e.g. hypertonic saline is generally avoided because it will cause volume overload).
  • Vaptans are one option for treating hypervolemic hyponatremia.  However, they are expensive, carry a risk of sodium over-correction, and tolvaptan carries an FDA warning against use in liver disease.
  • Urea may be a good option to facilitate excretion of water in the urine, but it is not available in the USA.
  • Lactulose is an osmotic laxative, which promotes loss of water in the stool.  This may be an inexpensive and safe approach to gently correct hyponatremia.

This blog is co-authored with Dr. Paul Farkas, senior consultant in gastroenterology.

This is the last of a series of three posts on hyponatremia. 

Conflicts of Interest: None

Recognizing and managing paradoxical reactions from benzodiazepines & propofol


A perplexing case

A young man with a history of seizures and alcoholism presented with a generalized seizure.  His seizure responded to lorazepam, but he was intubated for airway protection and transferred to the Genius General ICU.  He was also loaded with levetiracetam to prevent further seizures. 

Overnight he developed agitation.  Despite increasing his propofol to 80 mcg/kg/min and adding fentanyl, his agitation worsened (e.g. sitting bolt upright in bed).  There was concern that he was likely suffering from alcohol withdrawal, which may also have contributed to his seizure.  Consequently, he was loaded with 10 mg/kg of intravenous phenobarbital, with the goal of treating his alcohol withdrawal and also reducing the likelihood of recurrent seizures.  Given concern about the risk of propofol infusion syndrome, propofol was weaned off after the phenobarbital was delivered.

Subsequently he entered a stupor.  Fentanyl was stopped as well, but he remained very difficult to arouse.  What had happened?  Generally, 10 mg/kg phenobarbital should not cause that degree of sedation.

After a few hours, he woke up and was successfully extubated.  It ultimately turned out that his seizure was due to noncompliance with his antiepileptic, probably not alcohol withdrawal.  The remainder of his recovery was uneventful, without any recurrent seizures or any symptoms of withdrawal. 

In retrospect, he may have been having a paradoxical reaction to propofol.  Perhaps he was agitated because of the propofol, not despite it.  His transition from agitation to stupor may have been due to the combined effects of discontinuing propofol and adding phenobarbital (in the context of some residual lorazepam and post-ictal confusion).  Over time, his post-ictal state improved and the lorazepam was metabolized, allowing him to wake up.

Introduction

Benzodiazepines and propofol are widely used in critical care.  Both have similar mechanisms of action, based on activating inhibitory GABA receptors.  Rarely, these drugs may cause paradoxical agitation.  In the setting of an elective procedure, this diagnosis may be obvious.  Conversely, in the context of a complex critically ill patient, recognizing this diagnosis may be nearly impossible.  Regardless, this is an important diagnosis to recognize because it requires specific management.

Clinical features of paradoxical reactions

Central features of paradoxical reactions (PRs) are emotional lability, agitation, excessive movement, and confusion.  This may be associated with increased autonomic activity including tachycardia, hypertension, and tachypnea.  Unfortunately, there is no uniform definition of a PR, with the above description based on case studies describing PRs following benzodiazepines.  There is less literature describing PRs induced by propofol, but clinical features appear to be similar (Jeong 2011). 

Epidemiology of paradoxical reactions

Procedural sedation

The rate of PRs following benzodiazepines is probably on the order of 1-2% (Tae 2014).  Rates appear to be higher with propofol.  For example, one RCT found a rate of 36% with propofol vs. 5% with midazolam (p<0.05; Ibrahim 2001).  Risk factors include alcoholism, extremes of age, and psychiatric comorbidity. 

Jeong 2011 performed a prospective observational study of 190 patients undergoing propofol sedation combined with spinal anesthesia for knee surgery.  Patients with a history of hazardous drinking were much more likely to experience PRs.  This difference was more notable at higher doses of propofol: 

Abbreviations: HD = Hazardous drinking, NHD = Non-hazardous drinking.  Study 1 involved titration of propofol to target a bispectral index of 70-80.  Study 2 involved fixed infusions of propofol at two rates.  A severe PR was defined as requiring physical restraint.  For further details see open-access manuscript here.

Does this happen in critically ill patients?

Based on the frequency of benzodiazepine and propofol use among critically ill patients, PRs would be expected to occur with some regularity.  However, there is no mention of this in the medical literature.  Why not?

This may be due to the difficulty of diagnosing a PR in the ICU.  PRs are largely a diagnosis of exclusion, which may be very difficult in a complicated patient.  Agitation is common among ICU patients, so distinguishing a PR from garden-variety agitation may be like finding a needle in a haystack. 


PRs may be more common in the ICU than appreciated.  For example, occasionally intubated patients are encountered who remain agitated even despite high doses of propofol or benzodiazepine.  We may raise an eyebrow and perhaps comment on the patient's history of alcoholism, but otherwise little attention is paid to this.  Perhaps these patients are having untreated PRs. 

Neurobiology of paradoxical reactions

This remains unclear.  Some idiosyncratic reactions may be based on genetic variability.  Notably, one report documented a pair of identical twins who both had dramatic reactions to midazolam (Short 1987).  The increased rate of PR in alcoholism might relate to changes in GABA receptors and GABAergic pathways induced by alcoholism (e.g. differences in receptor subunit composition; Bhandage 2014).

Management of paradoxical reactions

Step 1:  Stop the offending agent

The most important aspect is recognizing the PR and discontinuing the causative medication.  Some early investigators felt that a PR might reflect "undersedation" which would respond to dose escalation, but this has proven counterproductive in an RCT and in case reports involving benzodiazepines (Golparvar 2004, Fiset 1992).  Jeong 2011 found the highest rate of severe PRs when propofol was titrated to a target sedation level, suggesting that responding to agitation with higher propofol doses simply aggravated the PR.  With propofol it may be possible to overpower a PR with coma-inducing doses, but this is an undesirable sedation strategy. 

Failure to diagnose that the patient is experiencing a PR may lead to progressive up-titration of sedative dose, leading to a vicious cycle:


Step 2:  Counteract residual drug: Flumazenil

Propofol will be rapidly metabolized, so residual drug is only an issue for benzodiazepines.  Flumazenil appears to be an excellent treatment if not contraindicated (e.g. by chronic benzodiazepine use).  About a dozen case reports describe flumazenil as being uniformly effective in rapidly terminating a PR due to a benzodiazepine.  Furthermore, flumazenil appears to terminate the PR while preserving amnesia and sedation, generally allowing completion of the procedure.

Step 3:  Add a non-GABA sedating medication

If the patient still requires sedation following the above steps, a sedative that doesn’t interact with GABA receptors may be added.  There are several reports of patients who had a PR with one benzodiazepine, yet were able to tolerate a different benzodiazepine (Mancuso 2004).  However, for acute management it may be safest to avoid this class of drugs entirely.

Non-GABA options include opioids, antipsychotics, dexmedetomidine, and ketamine.  There is little evidence to determine the best option:
  • Ketamine: One RCT of 24 children under six years old with PRs due to midazolam found ketamine 0.5 mg/kg to be effective (Golparvar 2004).  However, since all patients in this study were subsequently intubated for surgical procedures, it is possible that general anesthesia masked any emergence reactions from ketamine.
  • Haloperidol was successful in one case report (Mancuso 2004).
  • Opioids:  One prospective series of 4,140 patients undergoing gastrointestinal endoscopy showed that higher doses of opioid and lower doses of benzodiazepine correlated with a lower risk of PR, implying that opioids would not worsen a PR (Tae 2014).


  • Occasionally benzodiazepines induce a paradoxical reaction marked by agitated delirium with emotional lability and restlessness.  This may be more common with propofol.
  • Risk factors for paradoxical reactions include psychiatric comorbidity, extremes of age, and alcoholism.
  • Treatment consists of discontinuing the offending agent and reversing it if possible (with flumazenil for PRs due to benzodiazepine).  If needed, non-GABA sedatives may be used (e.g. ketamine, haloperidol, opioids).
  • Failing to recognize and treat a PR might lead to a vicious cycle of ongoing agitation:



Related blogs: The paradoxical excitation response by ScanCrit

Stay tuned, this is the first of a series of two posts about GABA receptors run amok in critically ill patients. 

Conflicts of Interest: None.  

Image credits: 
http://www.freeimages.com/photo/closeups-eyes-rage-emotion-1478626

https://en.wikipedia.org/wiki/Iceberg#/media/File:Iceberg.jpg

The SPLIT trial: Internal vs. external validity


Introduction

Resuscitation with large volumes of normal saline (NS) causes hyperchloremic metabolic acidosis.  Some evidence suggests that hyperchloremic metabolic acidosis may impair renal function, but the clinical relevance of this remains unclear.  If hyperchloremic metabolic acidosis is truly detrimental, this would be one argument to use balanced crystalloids rather than NS. 


SPLIT trial summary

This trial randomized 2,278 patients admitted to the ICU to receive normal saline or plasmalyte.  Nearly all ICU patients requiring crystalloid were included.  Overall, 57% of subjects were admitted to the ICU following elective surgery, whereas 14% of subjects were admitted from the emergency department.  The most common reason for admission was cardiac surgery (49%), compared to only 4% of subjects with septic shock. 

Patients in both groups received nearly the same volumes of crystalloid.  The day before enrollment, patients received a median of one liter of fluid (mostly balanced crystalloids).  Subsequently patients in both groups received a median of 2000 ml of study fluid over their entire ICU stay.  Most of this fluid was provided on the first day:


There were no differences in any outcome (renal failure, dialysis, serum creatinine, or mortality). 

Excellent internal validity

This study has outstanding internal validity:  it is a well-powered randomized trial with excellent enrollment and minimal bias.  This trial provides strong evidence that if you work in one of the ICUs where the study was performed, it doesn't matter which crystalloid you use.  If the authors of this trial decide to abandon using plasmalyte in their own practice, they would be on solid footing.

Limited external validity

How does this data apply to other situations?  A broader interpretation of the study is that administration of 1-2 liters of normal saline would not increase the risk of renal failure compared to plasmalyte.  This is not particularly controversial.  Even the most ardent supporters of balanced crystalloid would probably agree that fluid selection doesn't make a big difference at a volume of 1-2 liters.  The proposed mechanism of nephrotoxicity due to saline is induction of a hyperchloremic metabolic acidosis, which tends to occur with larger volumes of fluid. 

Unfortunately, this study doesn't answer the more pertinent question, which is the safety of larger volumes of saline.  In the USA, a typical patient with septic shock may receive 3-4 liters of crystalloid, while a patient with severe diabetic ketoacidosis might need 4-6 liters.

Ideally, an RCT would clarify whether hyperchloremic metabolic acidosis causes renal failure, which is the true physiologic question (figure below).  This would require comparing NS with a balanced crystalloid, with administration of sufficient volumes of fluid to induce a significant hyperchloremic metabolic acidosis.  Unfortunately, the SPLIT trial does not include information about whether patients receiving normal saline developed a significant hyperchloremic metabolic acidosis.  Therefore, this question has not been addressed. 


The external validity of this study is also limited by the patient composition:
  • Patients included in this study were not very sick, with only a 9% rate of acute kidney injury and a 3% rate of dialysis.  This may reflect the inclusion of patients transferred to the ICU following elective surgery.  As the authors noted, these results may not apply to sicker patients. 
  • One common cause of renal failure is sepsis-associated acute kidney injury, which has a different pathogenesis compared to other types of renal injury (Gomez 2014).  Given that only 4% of the patients in this study had sepsis, it is unclear whether these results apply to sepsis resuscitation. 

Incorrect to make generalizations about all balanced crystalloids


Studies often focus on the divide between NS and balanced crystalloids (e.g. plasmalyte and LR).  However, there are also substantial differences between plasmalyte and LR: 
  • Plasmalyte contains 23 mM of sodium gluconate, which is mostly excreted unchanged in the urine and might even act as an osmotic diuretic.
  • Plasmalyte contains 27 mM of sodium acetate, which the body converts into bicarbonate.  Concerns have been raised about potential vasodilatory and pro-inflammatory effects of acetate (Davies 2011).  
  • LR contains 28 mM of sodium lactate, which the body converts into bicarbonate.  Although lactate has a bad reputation due to its association with shock, lactate production is often an adaptive physiologic response to stress (e.g. sodium lactate may be used as a fuel by the heart and brain). 

Normal saline is occasionally referred to as "abnormal saline" because its composition is unphysiologic, but plasmalyte is also quite abnormal.  There is nothing physiologic about infusing sodium gluconate and sodium acetate.  Among all of these solutions, LR is arguably the most physiologic, because it is a balanced crystalloid constructed from anions normally present in the blood (chloride and lactate).

Comparison of NS vs. plasmalyte is complicated because the renal effects of gluconate and acetate are poorly understood.  Therefore, a trial of NS vs. plasmalyte is simultaneously testing three unknowns:  the effect of gluconate, the effect of acetate, and the effect of non-anion gap metabolic acidosis.  This makes it difficult to understand the results. 

For now, LR remains my resuscitative crystalloid of choice for most patients.  A better understanding of the role of LR in resuscitation would require a trial directly comparing NS vs. LR.  It would not be valid to extrapolate results obtained with plasmalyte to the use of LR.


  • The SPLIT trial reveals that low volumes of normal saline (e.g. two liters over an entire ICU stay) produce the same renal outcomes as plasmalyte.
  • The SPLIT study does not reveal whether larger volumes of normal saline are equivalent to plasmalyte.
  • The SPLIT study does not clarify whether hyperchloremic metabolic acidosis is safe.
  • Differences between plasmalyte and LR make it incorrect to assume that results obtained with plasmalyte will apply to LR. 
  • Although this study is well designed with excellent internal validity, it adds little to our understanding of large-volume resuscitation. 


Related links from this blog

 More on the SPLIT trial

Image credits: http://www.freeimages.com/photo/strange-tree-1360713

Phenobarbital monotherapy for alcohol withdrawal: Simplicity and power


Case example  

A middle-aged man was admitted to the ICU for refractory alcohol withdrawal.  Prior to arriving in the ICU he had been treated aggressively with an escalating regimen of IV diazepam, without any improvement.  Upon arrival in the ICU he had impressive tremors but was not delirious.

He was given an initial dose of 260 mg IV phenobarbital followed by 130 mg IV Q30 minutes as needed.  With each dose, his symptoms improved incrementally.  After receiving about 1200 mg of phenobarbital his symptoms resolved, leaving him awake and calm.  He was observed for a day prior to transfer out of the ICU, but required no additional treatment.

Introduction

Benzodiazepines have traditionally been the mainstay of treatment for alcohol withdrawal.  This dates back to an RCT in 1969 which compared a benzodiazepine (chlordiazepoxide), antipsychotics, antihistamines, and thiamine.  Unfortunately, there has never been an adequate RCT comparing a benzodiazepine with a barbiturate. 

Recently there has been increasing recognition that phenobarbital has advantages compared to benzodiazepines.  Phenobarbital has been shown to be beneficial both for initial up-front loading, and also for patients with symptoms refractory to benzodiazepines.  As explored in a prior post, this led to a treatment strategy which started and ended with phenobarbital:


Over time, emerging evidence and clinical experience have led us to doubt whether benzodiazepines offer an advantage compared to phenobarbital monotherapy:


Advantages of phenobarbital monotherapy

Neuroscience:  Phenobarbital is theoretically superior to benzodiazepines

Alcohol suppresses the brain via multiple mechanisms, including enhancement of inhibitory GABA receptors and suppression of excitatory glutamatergic receptors.  The brain adapts to chronic alcoholism by down-regulating inhibitory GABA receptors and up-regulating excitatory glutamatergic receptors (Rao 2015).  Such adaptation allows alcoholics to survive with blood alcohol levels which would kill most people.  Unfortunately, this also causes withdrawal:  both down-regulation of inhibitory GABA receptors and up-regulation of excitatory glutamate receptors excite the brain. 

Benzodiazepines stimulate inhibitory GABA receptors, which may improve alcohol withdrawal.  However, low GABA activity is only part of the problem.  In contrast, barbiturates have dual activity, simultaneously enhancing GABA activity and suppressing glutamatergic activity.  This dual mechanism of action is well matched to the pathophysiology of alcohol withdrawal, making barbiturates theoretically superior to benzodiazepines. 

Clinical experience:  Barbiturates are more powerful than benzodiazepines

Synergistic activity on two of the most important neurotransmitter systems makes barbiturates more powerful than benzodiazepines.  This power is observed in other neurologic disorders as well (e.g. seizures are often refractory to benzodiazepines, but rarely refractory to barbiturates).  Barbiturates are the hammer of neurotherapeutics:  there is no positive symptom which will not respond to a barbiturate.  Regarding alcohol withdrawal, it is widely recognized that a subset of patients with severe withdrawal will fail to respond to benzodiazepines, yet will subsequently respond to phenobarbital (as in the case above; Hack 2006).

Less delirium & paradoxical reactions?

Benzodiazepines are notorious for causing delirium.  For example, one study of intubated ICU patients found that nearly all patients who received >20 mg of lorazepam developed delirium (Pandharipande 2006).  Less commonly, lower doses of benzodiazepine elicit agitated delirium, known as a paradoxical reaction.  As discussed last week, paradoxical reactions are more common in alcoholism, with a rate of ~2% in this population (Tae 2014).  Is it possible that benzodiazepine-induced delirium and paradoxical reactions complicate the treatment of alcohol withdrawal without our being aware of these reactions? 

Moore 2014 reported a retrospective case series describing the use of flumazenil to evaluate for benzodiazepine-induced delirium in patients undergoing alcohol withdrawal.  Their general practice was to suspect benzodiazepine-induced delirium and trial flumazenil among patients with persistent confusion whose withdrawal seemed to have resolved (e.g. normal vital signs, no hyperreflexia).  Among 74 patients in whom a response to flumazenil was recorded, 84% improved while only two patients experienced increased anxiety.  This study suggests that much of the prolonged delirium observed in patients undergoing alcohol withdrawal may actually be benzodiazepine-induced delirium rather than alcohol withdrawal itself: 


Unlike benzodiazepines, phenobarbital doesn't cause paradoxical reactions (Ives 1991).  This may reflect phenobarbital's more balanced inhibitory effect on the brain via two neurotransmitters, which protects against disinhibition.   

Simplified pharmacology: Choose one GABAergic medication

Benzodiazepines and barbiturates act synergistically on the GABA receptors (benzodiazepines increase the frequency of channel opening, whereas barbiturates increase the duration of channel opening).  In some situations, this may cause patients to be unexpectedly sensitive to these medications.  For example, a patient loaded with 2,000 mg of phenobarbital might be at risk of over-sedation if treated with a usual dose of benzodiazepine.

The clinical effect of phenobarbital alone is more predictable.  As discussed previously, phenobarbital administration produces very predictable serum drug levels (figure below).  In the absence of confounding factors (e.g., benzodiazepines, other neurologic problems), the safe level of phenobarbital is well established.  This dose-response relationship may be helpful when selecting safe drug doses and monitoring the patient's response to therapy.  Removing benzodiazepines from the picture simplifies the pharmacology and allows these relationships to be used more reliably.

Relationship between cumulative phenobarbital dose and plasma phenobarbital concentration among patients treated for alcohol withdrawal (Tangmose 2010).  We have added green lines indicating the plasma therapeutic range for phenobarbital (64-172 micromol/L = 15-40 ug/ml), an orange line indicating the level at which mild signs of toxicity (e.g. ataxia and nystagmus) are usually noted (225 micromol/L = 50 ug/ml), and a red line indicating the lowest level which has been associated with stupor or coma (>280 micromol/L = 65 ug/ml) (Lee 2013). 
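The figure's dose-level relationship can be approximated with simple one-compartment pharmacokinetics.  Below is a back-of-envelope sketch; the molecular weight and volume of distribution (~0.65 L/kg) are textbook approximations we are assuming, and real levels vary between patients:

```python
# Back-of-envelope sketch of the dose-level relationship in the figure above.
# Assumed values: phenobarbital MW ~232.2 g/mol, volume of distribution
# ~0.65 L/kg. A one-compartment model ignores absorption and elimination.

PHENOBARBITAL_MW_G_PER_MOL = 232.2
VD_L_PER_KG = 0.65

def ug_ml_to_umol_l(level_ug_ml: float) -> float:
    """Convert a level in ug/ml (numerically equal to mg/L) to micromol/L."""
    return level_ug_ml * 1000.0 / PHENOBARBITAL_MW_G_PER_MOL

def predicted_level_ug_ml(cumulative_dose_mg_per_kg: float) -> float:
    """One-compartment estimate: level = cumulative dose / volume of distribution."""
    return cumulative_dose_mg_per_kg / VD_L_PER_KG

level = predicted_level_ug_ml(10)   # the 10 mg/kg load discussed below
print(f"10 mg/kg -> ~{level:.0f} ug/ml (~{ug_ml_to_umol_l(level):.0f} micromol/L)")
# ~15 ug/ml (~66 micromol/L): the bottom of the therapeutic range shown above.
```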

Seizure prophylaxis

Seizure is one of the most dangerous complications of alcohol withdrawal.  Among the agents used to treat alcohol withdrawal, barbiturates probably have the greatest anti-epileptic activity.  Combining this efficacy with the long duration of phenobarbital may provide patients with excellent seizure prophylaxis. 

There is little evidence regarding the relative efficacy of benzodiazepines versus barbiturates for alcohol withdrawal seizure.  One report found that the addition of primidone (a phenobarbital precursor) to chlordiazepoxide reduced the seizure rate (table below; Smith, 1976).  This suggests that in alcohol withdrawal, as with other causes of seizure, barbiturates offer anti-epileptic activity above and beyond that of a long-acting benzodiazepine. 


Improved pharmacokinetics with intravenous phenobarbital?

Historically, phenobarbital has often been given orally for the management of alcohol withdrawal.  When administered orally, the time of onset is less reliable compared to intravenous administration.  Thus, oral administration could increase the risk of administering several doses before the first doses are fully absorbed ("dose stacking"), with eventual overdose. 

Theoretically, intravenous phenobarbital should allow greater control over pharmacokinetics, with avoidance of dose stacking and improved safety.  It is unclear how much added benefit intravenous administration provides compared to oral phenobarbital.  For critically ill patients, the intravenous route is typically used to err on the side of safety.  However, for less ill patients without gastrointestinal problems, slower administration of oral phenobarbital has been demonstrated to be safe (Tangmose 2010). 

Evidence regarding phenobarbital monotherapy

“In Denmark barbital, a long-acting barbiturate, has been the drug of choice in the treatment of DT for many years.  It was introduced in the beginning of this century (Moller 1909).  In following discussions the importance of repeated, often large doses, (that is, 0.5-1 gram), in the initial stage of the disorder was stressed.  The aim of the treatment was to sedate the patient to such a degree that he fell asleep and then slept for several hours.  After this “critical sleep” the symptoms often disappeared completely.  The treatment as outlined in the first reports was found so favorable that barbital has been preferred by Danish psychiatrists for several decades” - Kramp 1978

Although barbiturate monotherapy is currently perceived as a new idea, it is actually a very old one.  In Denmark, barbiturate monotherapy was used for about a century with considerable success.  In the United States, a survey of inpatient alcohol treatment programs performed in 1992 estimated that 11% of all patients received barbiturates (Saitz 1995).  Unfortunately, this practice largely pre-dated evidence-based medicine.  Available studies are as follows: 

Kramp 1978  This is the only available prospective RCT that compares barbiturate to benzodiazepine for patients with severe alcohol withdrawal.  91 patients were randomized to receive intramuscular diazepam vs. oral barbital (an early long-acting barbiturate).  To preserve blinding of physicians and patients, all patients were treated simultaneously with intramuscular injections and oral medication, one of which was a placebo.  Although this study has substantial methodological flaws, barbiturate was found to be superior among the patients with the most severe withdrawal symptoms (table below).  Three patients in the diazepam group were initially refractory to therapy, leading clinicians to un-blind themselves.  Overall these results are consistent with our current recognition that a subset of patients with severe withdrawal will be refractory to benzodiazepines.


Ives 1991  These authors describe a protocol involving a loading dose of 15 mg/kg phenobarbital followed by a fixed taper over a week.  For breakthrough agitation, patients were allowed to receive low doses of lorazepam.  Although little data is provided, they reported that this protocol was used successfully in over seventy patients at the University of North Carolina at Chapel Hill from 1982-1990.  We are aware of success utilizing a similar protocol more recently at some hospitals in the Northeast USA. 

Michaelsen 2010  This is a retrospective cohort study comparing outcomes from the treatment of delirium tremens at two hospitals in Denmark from 1998 to 2006.  During this period, one hospital (Rigshospitalet) used oral phenobarbital, whereas the other hospital (Bispebjerg) transitioned from oral phenobarbital to intravenous diazepam in 2002.  There was no difference between diazepam and phenobarbital in terms of length of delirium tremens, mortality, or pneumonia (table below).  A sub-population (9%) in the diazepam group failed treatment and required phenobarbital. 


Transition to diazepam correlated with an increase in the rate of delirium associated with alcohol withdrawal at Bispebjerg (53 cases from 1998-2002 vs. 88 cases from 2002-2006).  Perhaps this reflects some patients who developed benzodiazepine-induced delirium. 

Hendey 2011  This is an RCT comparing phenobarbital vs. lorazepam for patients presenting to the emergency department with mild to moderate alcohol withdrawal.  44 patients were randomized to treatment with lorazepam or phenobarbital (260 mg IV followed by 130 mg IV PRN), with the majority subsequently discharged home.  The two treatments performed similarly, although the study was underpowered to demonstrate equivalence. 

Phenobarbital monotherapy: Nuts and bolts

Phenobarbital monotherapy is extremely simple, as it amounts to a dose-titration using a single medication.  Phenobarbital has a half-life of about three days, so successive doses will accumulate in an additive fashion.  The main requirement for using phenobarbital is patience, because it may take some time to reach an effective level.  However, this time investment is worth it, because once a therapeutic level is reached, little additional therapy may be needed (the phenobarbital will gradually auto-titrate off, providing therapy for days). 
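This "auto-titration" can be illustrated with first-order elimination.  In the sketch below, the ~3-day half-life comes from the paragraph above; the starting level of 30 ug/ml is an illustrative mid-therapeutic value, not a dosing recommendation:

```python
# Minimal sketch of the phenobarbital "auto-taper" after the final dose.
# Assumes simple first-order elimination with a ~3 day half-life.

HALF_LIFE_DAYS = 3.0

def level_after(days: float, starting_ug_ml: float = 30.0) -> float:
    """Serum level (ug/ml) `days` after the last dose, by exponential decay."""
    return starting_ug_ml * 0.5 ** (days / HALF_LIFE_DAYS)

for day in (0, 3, 6, 9):
    print(f"day {day}: ~{level_after(day):.1f} ug/ml")
# day 0: 30.0, day 3: 15.0, day 6: 7.5, day 9: 3.8 -- a level within the
# 15-40 ug/ml therapeutic range persists for roughly three days untreated.
```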

The main decision which needs to be made is whether the patient qualifies for an initial loading dose of 10 mg/kg ideal body weight (1).  For most patients who are initially presenting with alcohol withdrawal, 10 mg/kg of phenobarbital may be given safely (discussed previously here).  This dose will produce a serum level of phenobarbital around 15 ug/ml, which by itself isn't nearly enough to cause respiratory suppression.  Indeed, Ives 1991 used a 15 mg/kg loading dose divided over three doses.  However, if the patient has other active neurologic issues or has received a significant amount of sedating medication (especially benzodiazepine), there is a possibility that this dose could cause excessive sedation.  When in doubt, it is safer (albeit slower) to omit the loading dose and simply use an incremental IV phenobarbital titration:


As discussed above, we prefer to use intravenous phenobarbital initially, until there is resolution of symptoms (which will typically occur in the emergency department or intensive care unit).  Following improvement, patients may be transferred to a psychiatric or medical ward, where staff may be unfamiliar with the use of intravenous phenobarbital.  In this situation, oral or intramuscular phenobarbital may be used for mild or moderate withdrawal symptoms (Tangmose 2010). 

Phenobarbital and benzodiazepines act synergistically on the GABA receptors.  Therefore, after receiving a large cumulative dose of phenobarbital (>>10 mg/kg), patients may be at increased risk of over-sedation if they receive benzodiazepines.  This should be communicated to providers who will be caring for these patients after leaving the ICU or ED.  Ideally, patients who have been stabilized using a large dose of phenobarbital should continue with a phenobarbital monotherapy strategy. 


  • Although benzodiazepines are regarded as the mainstay of treatment for alcohol withdrawal, there has never been an adequately powered RCT comparing benzodiazepines vs. phenobarbital.
  • Benzodiazepines occasionally fail to control alcohol withdrawal, and may promote agitated delirium.  In contrast, phenobarbital is more effective and doesn't cause paradoxical agitation.
  • Some countries have extensive experience treating alcohol withdrawal with phenobarbital monotherapy.  Available evidence supports the safety and efficacy of this approach.
  • Phenobarbital monotherapy consists of a gradual dose titration as shown below.  Once a therapeutic phenobarbital level is reached, this will gradually auto-taper and provide ongoing protection from seizures or recurrent withdrawal. 


Coauthored with Dr. Ryan Clouser, neurointensivist colleague and drinking buddy.

Related posts:  Delirium Tremens part I - Posted last year, this covers some background about alcohol withdrawal.  Our current approach has changed from the algorithm described in that post, but the general rationale and pharmacology are the same.  

Notes

(1) We use ideal body weight, as previously described in alcohol withdrawal (e.g. Ives 1991).  Since phenobarbital is water-soluble, dosing based on actual weight in the setting of morbid obesity could lead to excessive doses. 
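A hypothetical worked example of the 10 mg/kg ideal-body-weight load follows.  The Devine formula is our assumption for illustration; the text above specifies only "ideal body weight" without naming a formula:

```python
# Hypothetical example of a 10 mg/kg ideal-body-weight phenobarbital load.
# The Devine formula below is an assumption; no formula is named in the post.

def ideal_body_weight_kg(height_cm: float, male: bool) -> float:
    """Devine estimate: 50 kg (men) or 45.5 kg (women) + 2.3 kg/inch over 5 ft."""
    inches_over_5_ft = max(0.0, height_cm / 2.54 - 60.0)
    return (50.0 if male else 45.5) + 2.3 * inches_over_5_ft

ibw = ideal_body_weight_kg(height_cm=178, male=True)
print(f"IBW ~{ibw:.0f} kg -> loading dose ~{10 * ibw:.0f} mg phenobarbital")
# IBW ~73 kg -> loading dose ~732 mg phenobarbital
```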

Image credits: https://en.wikipedia.org/wiki/Shaolin_Kung_Fu#/media/File:Shi_DeRu_and_Shi_DeYang.jpg


2015 ACLS Guidelines: What happened to VSE?


Introduction

In 2008 and 2013, two prospective RCTs from Greece reported benefits from the combination of vasopressin, steroids, and epinephrine (VSE) for in-hospital cardiac arrest.  However, other studies investigating the addition of vasopressin alone to epinephrine have been negative.  Consequently, vasopressin has been removed from the AHA/ACC algorithms, with a specific recommendation against the use of vasopressin in combination with epinephrine.  Meanwhile, these same guidelines contain a Class IIb recommendation to consider VSE for in-patient cardiac arrest.  How should we approach this? (1)


VSE:  Evidence about vasopressin, steroid, and epinephrine

Mentzelopoulos 2009

This was a single-center prospective double-blind trial which randomized 100 patients with in-hospital arrest to epinephrine vs. epinephrine plus a combination of three interventions:  vasopressin 20 IU for up to five cycles of CPR, methylprednisolone 40 mg IV during CPR, and tapered stress-dose hydrocortisone (300 mg/d) for patients with post-arrest shock.  Patients treated with VSE had improved return of spontaneous circulation (ROSC; 81% vs. 52%; p=0.003) and survival to hospital discharge (19% vs. 4%; p=0.02).  Results were perhaps most dramatic among patients who developed post-resuscitation shock, in whom survival to discharge was 30% with VSE (8/27 patients) versus none in the control group (0/15; p=0.02).  Patients receiving VSE had decreased levels of pro-inflammatory cytokines, improved hemodynamics, and less organ failure (figure below).
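For reference, the trial's intervention can be transcribed as structured data.  The doses below are as described above; the cumulative-dose arithmetic is ours, not from the trial report:

```python
# The VSE intervention from Mentzelopoulos 2009, transcribed for reference.

VSE = {
    "vasopressin_iu_per_cpr_cycle": 20,      # given for up to five cycles
    "max_vasopressin_cycles": 5,
    "methylprednisolone_mg_during_cpr": 40,  # single intra-arrest dose
    "hydrocortisone_mg_per_day": 300,        # tapered, for post-arrest shock
}

max_vasopressin = VSE["vasopressin_iu_per_cpr_cycle"] * VSE["max_vasopressin_cycles"]
print(f"Maximum cumulative vasopressin: {max_vasopressin} IU")
# 100 IU maximum; the ~70-73 IU averages actually given (see the V-E section
# below) suggest vasopressin was typically given for three to four cycles.
```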



Mentzelopoulos 2013

This was a larger replication of the 2009 study, designed to address its weaknesses.  Rather than a single-center study, this trial was performed at three centers in Greece.  The study was better powered, enrolling 268 patients with in-hospital cardiac arrest rather than 100.  Finally, the primary outcome was more patient-centered: discharge with good neurologic function.  The intervention (VSE) was exactly the same as in the 2009 study.

The results were nearly identical to the 2009 study.  Patients in the VSE group had a higher rate of ROSC (84% vs. 66%; p=0.005) and discharge with good neurologic outcome (14% vs. 5%; p=0.02).  Among patients with post-arrest shock, those in the VSE group had an improved rate of discharge with good neurologic outcome compared to the control group (21% vs. 8%, p=0.02; figure below).
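Taking these results at face value, the absolute benefit translates into a number needed to treat.  A quick calculation using the rounded rates quoted above:

```python
# Number needed to treat, from the rounded 2013 outcome rates quoted above
# (discharge with good neurologic outcome: 14% with VSE vs. 5% with control).

def nnt(rate_treated: float, rate_control: float) -> float:
    """NNT = 1 / absolute risk difference (for a beneficial outcome)."""
    return 1.0 / (rate_treated - rate_control)

print(f"NNT ~{nnt(0.14, 0.05):.0f}")
# NNT ~11: roughly one additional neurologically intact survivor for every
# 11 patients treated with VSE, if the trial's results hold.
```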


V-E:  Evidence about vasopressin and epinephrine

Gueugniaud 2008 performed a double-blind RCT comparing epinephrine vs. epinephrine plus 40 IU vasopressin during the first two cycles of CPR in out-of-hospital cardiac arrest.  Although the study was very well powered (n = 2,894), there was no evidence of benefit from adding vasopressin (table below).  However, there were also no adverse events observed in patients receiving vasopressin.  The average dose of vasopressin administered to subjects in this study (77 IU) was nearly identical to the average dose of vasopressin in the two trials of VSE (73 IU and 70 IU).  This study suggests that adding ~80 IU of vasopressin on top of epinephrine has little clinical effect (neither benefit nor harm). 


S-E:  Evidence about steroid and epinephrine

Theoretical evidence suggests a potential benefit of steroid during cardiac arrest.  Correlational studies show higher levels of cortisol in survivors.  Animal studies have suggested that intra-arrest steroid improves the return of spontaneous circulation and neurologic outcomes (Smithline 1993, Katz 1989).

Tsai 2007 performed a prospective non-randomized trial of a single intra-arrest dose of 100 mg hydrocortisone for out-of-hospital cardiac arrest.  Hydrocortisone was provided whenever it was possible to obtain consent (36/97 patients).  Patients treated with hydrocortisone had improved rates of ROSC (61% vs. 39%, p=0.038) but similar in-hospital mortality (92% vs. 90%).  The greatest differences were observed when the study drug was given quickly after arrest:


Despite the potential for confounding factors, this study supports the concept that steroid improves ROSC.  However, improvement was only transient, suggesting that ongoing steroid administration may be needed for a sustained benefit. 

S:  Theoretical basis for steroid in post-arrest shock

Following ROSC, many patients experience a sepsis-like state characterized by a surge in pro-inflammatory cytokines and vasodilation.  Cardiac arrest impairs the function of the adrenal axis, leaving patients especially vulnerable to post-arrest shock (Varvarousi 2014). 

Although the efficacy of steroid in septic shock remains controversial, it has been demonstrated to reduce the vasopressor requirement.  Similar effects could be beneficial in post-arrest patients who may be especially sensitive to hypotension (which could worsen anoxic brain injury) and adverse effects from vasopressors (particularly pro-arrhythmia). 

Intra-arrest corticosteroid is an attractive concept.  One limitation of anti-inflammatory therapies in sepsis is that the inflammatory cascade has already been unleashed by the time the patient is undergoing treatment.  Alternatively, providing intra-arrest steroid could modulate the inflammatory response as it begins to unfold.

Summary of all evidence

VSE is supported by two RCTs involving a total of 368 patients, which is more than the two studies used as the basis for therapeutic hypothermia (Bernard et al. and the HACA trial, which together included 350 patients).  VSE trials were more rigorous than the hypothermia trials because treating clinicians were blinded to the intervention (in hypothermia trials, patients being cooled received more attention).  The main limitation of the VSE studies is that they were both performed by the same group of investigators. 


Although the efficacy of VSE remains in question, there is considerable evidence that vasopressin and steroid are reasonably safe.  The VSE trials noted no adverse events.  Larger studies of vasopressin have also found it to be safe in cardiac arrest.  Although there is less data about steroid in cardiac arrest, stress-dose steroid has been found to be fairly safe among critically ill patients.  There is occasionally concern that steroid could interfere with healing of acute myocardial infarction, but this appears unfounded (Giugliano 2003). 

Putting evidence into practice

From an evidence-based standpoint, it would be ideal to use the full VSE protocol.  This was given a Class IIb recommendation by the AHA/ACC for in-hospital cardiac arrest. 

However, with negative evidence regarding vasopressin, practice is already moving away from the use of vasopressin in cardiac arrest.  The removal of vasopressin from the 2015 AHA/ACC algorithm will likely accelerate this trend.  Thus, in the near future, ACLS teams may be unprepared to mix and administer vasopressin.  This may impair the ability to perform a full VSE protocol.

Given that the benefit of VSE seems to derive from the steroid component, it may be reasonable to use steroid plus epinephrine in a situation where VSE cannot be logistically achieved.  This combination of steroid and epinephrine is weakly supported by the AHA/ACC guidelines for out-of-hospital cardiac arrest (Class IIb recommendation). 

Regardless of how ROSC is achieved, the use of stress-dose steroid should be considered for patients with post-arrest shock.  No studies have specifically investigated the role of stress-dose steroids for post-arrest shock.  However, in both of the Mentzelopoulos studies, patients with post-arrest shock treated with stress-dose steroids had improved survival. 

In-hospital cardiac arrest (IHCA) vs. out-of-hospital cardiac arrest (OHCA)

One point of contention is whether evidence obtained in one type of arrest is applicable in the other type of arrest.  Ideally, there would be adequate evidence from both settings, but this is not the case.  Evidence about cardiac arrest is so sparse that the adult basic life support guidelines are based partially on studies of baby pigs. 



Until we have more evidence, it is probably safe to assume that there are more similarities between adult OHCA and adult IHCA than between adults and piglets.  Furthermore, OHCA and IHCA populations are heterogeneous, so findings derived from either population may be fairly generalizable.  For example, the VSE trials included patients with any rhythm located anywhere in the hospital (ICU, ward, emergency department, or operating room).

This is reminiscent of debates about using targeted temperature management (TTM) for IHCA.  RCTs investigating targeted temperature management have all been performed on OHCA patients.  So, if one truly believes that IHCA and OHCA are distinct entities, then targeted temperature management shouldn't be used for IHCA.  Of course, targeted temperature management is currently recommended for both IHCA and OHCA (2).  Thus, there seems to be a double standard regarding TTM vs. VSE:  why is it acceptable to generalize TTM data from OHCA to IHCA, but not to generalize VSE data from IHCA to OHCA? 

European 2015 Guidelines

Although this post focuses on the AHA/ACC guidelines, the European Resuscitation Council has also released fresh guidelines for 2015.  They state:


It is notable that the ERC and AHA/ACC guidelines make conflicting recommendations, although they were released simultaneously and based on identical evidence.  Despite attempts to be evidence-based, insufficient evidence exists to reach any definite conclusions.


  • VSE is the only pharmacotherapy for cardiac arrest that has ever been shown to improve survival with good neurologic outcome.  The AHA/ACC weakly recommends VSE (Class IIb) for in-hospital cardiac arrest.
  • The addition of vasopressin alone to epinephrine has not improved outcomes, leading the AHA/ACC to recommend against adding vasopressin alone to epinephrine. 
  • In a context where VSE cannot be implemented, a reasonable approach might be to simply add 40 mg of methylprednisolone during CPR with epinephrine.  The AHA/ACC weakly supports this, with a Class IIb recommendation to use steroid for out-of-hospital arrest. 
  • Post-cardiac arrest shock is common and has some similarities to septic shock (i.e. excessive inflammation causing vasodilation).  Stress-dose steroid may be considered for these patients.  
  • More evidence is needed, but in the interim it seems reasonable to utilize therapies where the benefit appears to outweigh the risks.  



Related posts from this blog

More information on the Mentzelopoulos 2013 study of VSE:

More information about 2015 AHA/ACC & ERC guidelines:

Notes

(1) Note that this blog is written from the perspective of a health-care system which uses epinephrine per AHA/ACC guidelines.  Whether this is the best approach is a question for another day. 

(2) With regard to TTM, this debate has been simplified dramatically by our use of TTM at 36C, which is easier and less risky than cooling to 33C.  Thus, if there is ever a question of whether the patient should receive TTM, it is best to err on the side of caution and just use TTM at 36C.  More discussion about TTM at 36C here. 

Conflicts of Interest: Never.

Image credits:
http://www.cliparthut.com/secretary-cartoons-clipart-7rTvKj.html

https://commons.wikimedia.org/wiki/File:Sow_and_five_piglets.jpg