Biblio du Mois: November 2018

December 2, 2018 – La Biblio du Mois


After this short month back at work, it is time to bring you the new literature review of the month!

This edition is a rich one: it continues with the studies presented at ESICM, with their wealth of negative trials, but also includes some very interesting anesthesia studies, among them machine-learning work for the initiated.

We have not forgotten studies on cardiac arrest and ventilation, nor infectious diseases, with a review of inhaled antibiotics.

In short, it is Christmas before Christmas: there will be something for everyone!

Do not forget to follow us and to take part in our exclusive contest with Arnette: here!

And come join us in Toulouse next week to specialize in the perioperative management of transplant patients: details here!

Yes, you are being spoiled!





A false belief about PPIs in the ICU?



Krag et al., NEJM, 2018

DOI: 10.1056/NEJMoa1714919

Link to the article





Prophylaxis for gastrointestinal stress ulceration is frequently given to patients in the intensive care unit (ICU), but its risks and benefits are unclear.


In this European, multicenter, parallel-group, blinded trial, we randomly assigned adults who had been admitted to the ICU for an acute condition (i.e., an unplanned admission) and who were at risk for gastrointestinal bleeding to receive 40 mg of intravenous pantoprazole (a proton-pump inhibitor) or placebo daily during the ICU stay. The primary outcome was death by 90 days after randomization.


A total of 3298 patients were enrolled; 1645 were randomly assigned to the pantoprazole group and 1653 to the placebo group. Data on the primary outcome were available for 3282 patients (99.5%). At 90 days, 510 patients (31.1%) in the pantoprazole group and 499 (30.4%) in the placebo group had died (relative risk, 1.02; 95% confidence interval [CI], 0.91 to 1.13; P=0.76). During the ICU stay, at least one clinically important event (a composite of clinically important gastrointestinal bleeding, pneumonia, Clostridium difficile infection, or myocardial ischemia) had occurred in 21.9% of patients assigned to pantoprazole and 22.6% of those assigned to placebo (relative risk, 0.96; 95% CI, 0.83 to 1.11). In the pantoprazole group, 2.5% of patients had clinically important gastrointestinal bleeding, as compared with 4.2% in the placebo group. The number of patients with infections or serious adverse reactions and the percentage of days alive without life support within 90 days were similar in the two groups.
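As a reminder of how such effect estimates are computed, here is a minimal sketch that recomputes the relative risk of death and a Wald 95% CI from the counts above; it uses the full randomized denominators, so it differs slightly from the published figures, which are based on the 3282 patients with outcome data (and a stratified analysis).

```python
from math import log, sqrt, exp

# Death counts from the abstract; denominators are the randomized group sizes
# (an approximation: the published RR uses the 3282 patients with outcome data).
d_ppi, n_ppi = 510, 1645   # pantoprazole group
d_plc, n_plc = 499, 1653   # placebo group

rr = (d_ppi / n_ppi) / (d_plc / n_plc)

# Wald 95% CI for the relative risk, computed on the log scale
se = sqrt(1/d_ppi - 1/n_ppi + 1/d_plc - 1/n_plc)
lo, hi = exp(log(rr) - 1.96 * se), exp(log(rr) + 1.96 * se)
print(f"RR = {rr:.2f} (95% CI, {lo:.2f} to {hi:.2f})")
```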


Among adult patients in the ICU who were at risk for gastrointestinal bleeding, mortality at 90 days and the number of clinically important events were similar in those assigned to pantoprazole and those assigned to placebo.

Nutrition in mechanically ventilated patients: hypercaloric or normocaloric?



The TARGET Investigators, NEJM, 2018

Link to the article




The effect of delivering nutrition at different calorie levels during critical illness is uncertain, and patients typically receive less than the recommended amount.


We conducted a multicenter, double-blind, randomized trial, involving adults undergoing mechanical ventilation in 46 Australian and New Zealand intensive care units (ICUs), to evaluate energy-dense (1.5 kcal per milliliter) as compared with routine (1.0 kcal per milliliter) enteral nutrition at a dose of 1 ml per kilogram of ideal body weight per hour, commencing at or within 12 hours of the initiation of nutrition support and continuing for up to 28 days while the patient was in the ICU. The primary outcome was all-cause mortality within 90 days.
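To make the dosing arithmetic concrete, here is a small sketch, for a hypothetical 70-kg ideal body weight (not trial data), of why the two arms received the same volume but different daily energy; delivered means in practice are lower, since feeding is commonly interrupted.

```python
# Illustrative arithmetic only: enteral feeding at 1 mL per kg of ideal body
# weight per hour, for a hypothetical patient with a 70-kg ideal body weight.
ibw_kg = 70
rate_ml_h = 1 * ibw_kg             # infusion rate, mL/h
daily_volume_ml = rate_ml_h * 24   # identical volume in both arms

kcal_dense = daily_volume_ml * 1.5    # energy-dense formula (1.5 kcal/mL)
kcal_routine = daily_volume_ml * 1.0  # routine formula (1.0 kcal/mL)
print(daily_volume_ml, kcal_dense, kcal_routine)
```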


There were 3957 patients included in the modified intention-to-treat analysis (1971 in the 1.5-kcal group and 1986 in the 1.0-kcal group). The volume of enteral nutrition delivered during the trial was similar in the two groups; however, patients in the 1.5-kcal group received a mean (±SD) of 1863±478 kcal per day as compared with 1262±313 kcal per day in the 1.0-kcal group (mean difference, 601 kcal per day; 95% confidence interval [CI], 576 to 626). By day 90, a total of 523 of 1948 patients (26.8%) in the 1.5-kcal group and 505 of 1966 patients (25.7%) in the 1.0-kcal group had died (relative risk, 1.05; 95% CI, 0.94 to 1.16; P=0.41). The results were similar in seven predefined subgroups. Higher calorie delivery did not affect survival time, receipt of organ support, number of days alive and out of the ICU and hospital or free of organ support, or the incidence of infective complications or adverse events.


In patients undergoing mechanical ventilation, the rate of survival at 90 days associated with the use of an energy-dense formulation for enteral delivery of nutrition was not higher than that with routine enteral nutrition.



Haloperidol or ziprasidone for delirium in the ICU?

Girard et al., NEJM, 2018

DOI: 10.1056/NEJMoa1808217



There are conflicting data on the effects of antipsychotic medications on delirium in patients in the intensive care unit (ICU).


In a randomized, double-blind, placebo-controlled trial, we assigned patients with acute respiratory failure or shock and hypoactive or hyperactive delirium to receive intravenous boluses of haloperidol (maximum dose, 20 mg daily), ziprasidone (maximum dose, 40 mg daily), or placebo. The volume and dose of a trial drug or placebo was halved or doubled at 12-hour intervals on the basis of the presence or absence of delirium, as detected with the use of the Confusion Assessment Method for the ICU, and of side effects of the intervention. The primary end point was the number of days alive without delirium or coma during the 14-day intervention period. Secondary end points included 30-day and 90-day survival, time to freedom from mechanical ventilation, and time to ICU and hospital discharge. Safety end points included extrapyramidal symptoms and excessive sedation.


Written informed consent was obtained from 1183 patients or their authorized representatives. Delirium developed in 566 patients (48%), of whom 89% had hypoactive delirium and 11% had hyperactive delirium. Of the 566 patients, 184 were randomly assigned to receive placebo, 192 to receive haloperidol, and 190 to receive ziprasidone. The median duration of exposure to a trial drug or placebo was 4 days (interquartile range, 3 to 7). The median number of days alive without delirium or coma was 8.5 (95% confidence interval [CI], 5.6 to 9.9) in the placebo group, 7.9 (95% CI, 4.4 to 9.6) in the haloperidol group, and 8.7 (95% CI, 5.9 to 10.0) in the ziprasidone group (P=0.26 for overall effect across trial groups). The use of haloperidol or ziprasidone, as compared with placebo, had no significant effect on the primary end point (odds ratios, 0.88 [95% CI, 0.64 to 1.21] and 1.04 [95% CI, 0.73 to 1.48], respectively). There were no significant between-group differences with respect to the secondary end points or the frequency of extrapyramidal symptoms.


The use of haloperidol or ziprasidone, as compared with placebo, in patients with acute respiratory failure or shock and hypoactive or hyperactive delirium in the ICU did not significantly alter the duration of delirium.



The POLAR trial: prophylactic hypothermia for traumatic brain injury?




Link to the article



Importance  After severe traumatic brain injury, induction of prophylactic hypothermia has been suggested to be neuroprotective and improve long-term neurologic outcomes.

Objective  To determine the effectiveness of early prophylactic hypothermia compared with normothermic management of patients after severe traumatic brain injury.

Design, Setting, and Participants  The Prophylactic Hypothermia Trial to Lessen Traumatic Brain Injury–Randomized Clinical Trial (POLAR-RCT) was a multicenter randomized trial in 6 countries that recruited 511 patients both out-of-hospital and in emergency departments after severe traumatic brain injury. The first patient was enrolled on December 5, 2010, and the last on November 10, 2017. The final date of follow-up was May 15, 2018.

Interventions  There were 266 patients randomized to the prophylactic hypothermia group and 245 to normothermic management. Prophylactic hypothermia targeted the early induction of hypothermia (33°C-35°C) for at least 72 hours and up to 7 days if intracranial pressures were elevated, followed by gradual rewarming. Normothermia targeted 37°C, using surface-cooling wraps when required. Temperature was managed in both groups for 7 days. All other care was at the discretion of the treating physician.

Main Outcomes and Measures  The primary outcome was favorable neurologic outcomes or independent living (Glasgow Outcome Scale–Extended score, 5-8 [scale range, 1-8]) obtained by blinded assessors 6 months after injury.

Results  Among 511 patients who were randomized, 500 provided ongoing consent (mean age, 34.5 years [SD, 13.4]; 402 men [80.2%]) and 466 completed the primary outcome evaluation. Hypothermia was initiated rapidly after injury (median, 1.8 hours [IQR, 1.0-2.7 hours]) and rewarming occurred slowly (median, 22.5 hours [IQR, 16-27 hours]). Favorable outcomes (Glasgow Outcome Scale–Extended score, 5-8) at 6 months occurred in 117 patients (48.8%) in the hypothermia group and 111 (49.1%) in the normothermia group (risk difference, 0.4% [95% CI, –9.4% to 8.7%]; relative risk with hypothermia, 0.99 [95% CI, 0.82-1.19]; P = .94). In the hypothermia and normothermia groups, the rates of pneumonia were 55.0% vs 51.3%, respectively, and rates of increased intracranial bleeding were 18.1% vs 15.4%, respectively.

Conclusions and Relevance  Among patients with severe traumatic brain injury, early prophylactic hypothermia compared with normothermia did not improve neurologic outcomes at 6 months. These findings do not support the use of early prophylactic hypothermia for patients with severe traumatic brain injury.




Low versus intermediate tidal volume in ICU patients without ARDS




PReVENT Investigators, JAMA 2018


Link to the article




Importance  It remains uncertain whether invasive ventilation should use low tidal volumes in critically ill patients without acute respiratory distress syndrome (ARDS).

Objective  To determine whether a low tidal volume ventilation strategy is more effective than an intermediate tidal volume strategy.

Design, Setting, and Participants  A randomized clinical trial, conducted from September 1, 2014, through August 20, 2017, including patients without ARDS expected to not be extubated within 24 hours after start of ventilation from 6 intensive care units in the Netherlands.

Interventions  Invasive ventilation using low tidal volumes (n = 477) or intermediate tidal volumes (n = 484).

Main Outcomes and Measures  The primary outcome was the number of ventilator-free days and alive at day 28. Secondary outcomes included length of ICU and hospital stay; ICU, hospital, and 28- and 90-day mortality; and development of ARDS, pneumonia, severe atelectasis, or pneumothorax.

Results  In total, 961 patients (65% male), with a median age of 68 years (interquartile range [IQR], 59-76), were enrolled. At day 28, 475 patients in the low tidal volume group had a median of 21 ventilator-free days (IQR, 0-26), and 480 patients in the intermediate tidal volume group had a median of 21 ventilator-free days (IQR, 0-26) (mean difference, –0.27 [95% CI, –1.74 to 1.19]; P = .71). There was no significant difference in ICU (median, 6 vs 6 days; 0.39 [–1.09 to 1.89]; P = .58) and hospital (median, 14 vs 15 days; –0.60 [–3.52 to 2.31]; P = .68) length of stay or 28-day (34.9% vs 32.1%; hazard ratio [HR], 1.12 [0.90 to 1.40]; P = .30) and 90-day (39.1% vs 37.8%; HR, 1.07 [0.87 to 1.31]; P = .54) mortality. There was no significant difference in the percentage of patients developing the following adverse events: ARDS (3.8% vs 5.0%; risk ratio [RR], 0.86 [0.59 to 1.24]; P = .38), pneumonia (4.2% vs 3.7%; RR, 1.07 [0.78 to 1.47]; P = .67), severe atelectasis (11.4% vs 11.2%; RR, 1.00 [0.81 to 1.23]; P = .94), and pneumothorax (1.8% vs 1.3%; RR, 1.16 [0.73 to 1.84]; P = .55).

Conclusions and Relevance  In patients in the ICU without ARDS who were expected not to be extubated within 24 hours of randomization, a low tidal volume strategy did not result in a greater number of ventilator-free days than an intermediate tidal volume strategy.




The HIGH trial: Optiflow for immunocompromised patients?



JAMA. Published online October 24, 2018.

Link to the article




Importance  High-flow nasal oxygen therapy is increasingly used for acute hypoxemic respiratory failure (AHRF).

Objective  To determine whether high-flow oxygen therapy decreases mortality among immunocompromised patients with AHRF compared with standard oxygen therapy.

Design, Setting, and Participants  The HIGH randomized clinical trial enrolled 776 adult immunocompromised patients with AHRF (Pao2 <60 mm Hg or Spo2 <90% on room air, or tachypnea >30/min or labored breathing or respiratory distress, and need for oxygen ≥6 L/min) at 32 intensive care units (ICUs) in France between May 19, 2016, and December 31, 2017.

Interventions  Patients were randomized 1:1 to continuous high-flow oxygen therapy (n = 388) or to standard oxygen therapy (n = 388).

Main Outcomes and Measures  The primary outcome was day-28 mortality. Secondary outcomes included intubation and mechanical ventilation by day 28, Pao2:Fio2 ratio over the 3 days after intubation, respiratory rate, ICU and hospital lengths of stay, ICU-acquired infections, and patient comfort and dyspnea.

Results  Of 778 randomized patients (median age, 64 [IQR, 54-71] years; 259 [33.3%] women), 776 (99.7%) completed the trial. At randomization, median respiratory rate was 33/min (IQR, 28-39) vs 32 (IQR, 27-38) and Pao2:Fio2 was 136 (IQR, 96-187) vs 128 (IQR, 92-164) in the intervention and control groups, respectively. Median SOFA score was 6 (IQR, 4-8) in both groups. Mortality on day 28 was not significantly different between groups (35.6% vs 36.1%; difference, −0.5% [95% CI, −7.3% to +6.3%]; hazard ratio, 0.98 [95% CI, 0.77 to 1.24]; P = .94). Intubation rate was not significantly different between groups (38.7% vs 43.8%; difference, −5.1% [95% CI, −12.3% to +2.0%]). Compared with controls, patients randomized to high-flow oxygen therapy had a higher Pao2:Fio2 (150 vs 119; difference, 19.5 [95% CI, 4.4 to 34.6]) and lower respiratory rate after 6 hours (25/min vs 26/min; difference, −1.8/min [95% CI, −3.2 to −0.2]). No significant difference was observed in ICU length of stay (8 vs 6 days; difference, 0.6 [95% CI, −1.0 to +2.2]), ICU-acquired infections (10.0% vs 10.6%; difference, −0.6% [95% CI, −4.6 to +4.1]), hospital length of stay (24 vs 27 days; difference, −2 days [95% CI, −7.3 to +3.3]), or patient comfort and dyspnea scores.

Conclusions and Relevance  Among critically ill immunocompromised patients with acute respiratory failure, high-flow oxygen therapy did not significantly decrease day-28 mortality compared with standard oxygen therapy.




Effect of a recombinant alkaline phosphatase on creatinine clearance in sepsis: the STOP-AKI trial



Pickkers et al., JAMA, 2018

JAMA. 2018;320(19):1998-2009. doi:10.1001/jama.2018.14283

Importance  Sepsis-associated acute kidney injury (AKI) adversely affects long-term kidney outcomes and survival. Administration of the detoxifying enzyme alkaline phosphatase may improve kidney function and survival.

Objective  To determine the optimal therapeutic dose, effect on kidney function, and adverse effects of a human recombinant alkaline phosphatase in patients who are critically ill with sepsis-associated AKI.

Design, Setting, and Participants  The STOP-AKI trial was an international (53 recruiting sites), randomized, double-blind, placebo-controlled, dose-finding, adaptive phase 2a/2b study in 301 adult patients admitted to the intensive care unit with a diagnosis of sepsis and AKI. Patients were enrolled between December 2014 and May 2017, and follow-up was conducted for 90 days. The final date of follow-up was August 14, 2017.

Interventions  In the intention-to-treat analysis, in part 1 of the trial, patients were randomized to receive recombinant alkaline phosphatase in a dosage of 0.4 mg/kg (n = 31), 0.8 mg/kg (n = 32), or 1.6 mg/kg (n = 29) or placebo (n = 30), once daily for 3 days, to establish the optimal dose. The optimal dose was identified as 1.6 mg/kg based on modeling approaches and adverse events. In part 2, 1.6 mg/kg (n = 82) was compared with placebo (n = 86).

Main Outcomes and Measures  The primary end point was the time-corrected area under the curve of the endogenous creatinine clearance for days 1 through 7, divided by 7 to provide a mean daily creatinine clearance (AUC1-7 ECC). Incidence of fatal and nonfatal (serious) adverse events ([S]AEs) was also determined.
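As a concrete illustration of this primary end point, the sketch below computes a time-corrected AUC of creatinine clearance by the trapezoidal rule over days 1 through 7 and divides by 7, following the abstract's definition; the daily ECC values are made up for the example and are not trial data.

```python
# Hypothetical daily endogenous creatinine clearance (ECC, mL/min), days 1-7.
# These values are illustrative only, not STOP-AKI data.
days = [1, 2, 3, 4, 5, 6, 7]
ecc = [26.0, 30.5, 38.0, 47.0, 54.5, 61.0, 65.4]

# Trapezoidal AUC over days 1-7, then divided by 7 to yield a mean daily
# clearance (AUC1-7 ECC), as the end point is defined in the abstract.
auc = sum((ecc[i] + ecc[i + 1]) / 2 * (days[i + 1] - days[i])
          for i in range(len(days) - 1))
mean_daily_ecc = auc / 7
print(f"AUC1-7 ECC ≈ {mean_daily_ecc:.1f} mL/min")
```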

Results  Overall, 301 patients were enrolled (men, 70.7%; median age, 67 years [interquartile range {IQR}, 59-73]). From day 1 to day 7, median ECC increased from 26.0 mL/min (IQR, 8.8 to 59.5) to 65.4 mL/min (IQR, 26.7 to 115.4) in the recombinant alkaline phosphatase 1.6-mg/kg group vs from 35.9 mL/min (IQR, 12.2 to 82.9) to 61.9 mL/min (IQR, 22.7 to 115.2) in the placebo group (absolute difference, 9.5 mL/min [95% CI, −23.9 to 25.5]; P = .47). Fatal adverse events occurred in 26.3% of patients in the 0.4-mg/kg recombinant alkaline phosphatase group; 17.1% in the 0.8-mg/kg group, 17.4% in the 1.6-mg/kg group, and 29.5% in the placebo group. Rates of nonfatal SAEs were 21.0% for the 0.4-mg/kg recombinant alkaline phosphatase group, 14.3% for the 0.8-mg/kg group, 25.7% for the 1.6-mg/kg group, and 20.5% for the placebo group.

Conclusions and Relevance  Among patients who were critically ill with sepsis-associated acute kidney injury, human recombinant alkaline phosphatase compared with placebo did not significantly improve short-term kidney function. Further research is necessary to assess other clinical outcomes.



Simulation to prevent work-related stress: does it work for paramedical staff?



Radia El Khamali et al., for the SISTRESSREA Study Group, JAMA. Published online October 24, 2018.

Link to the article



Importance  Nurses working in an intensive care unit (ICU) are exposed to occupational stressors that can increase the risk of stress reactions, long-term absenteeism, and turnover.

Objective  To evaluate the effects of a program including simulation in reducing work-related stress and work-related outcomes among ICU nurses.

Design, Setting, and Participants  Multicenter randomized clinical trial performed at 8 adult ICUs in France from February 8, 2016, through April 29, 2017. A total of 198 ICU nurses were included and followed up for 1 year until April 30, 2018.

Interventions  The ICU nurses who had at least 6 months of ICU experience were randomized to the intervention group (n = 101) or to the control group (n = 97). The nurses randomized to the intervention group received a 5-day course involving a nursing theory recap and situational role-play using simulated scenarios (based on technical dexterity, clinical approach, decision making, aptitude to teamwork, and task prioritization), which were followed by debriefing sessions on attitude and discussion of practices.

Main Outcomes and Measures  The primary outcome was the prevalence of job strain assessed by combining a psychological demand score greater than 21 (score range, 9 [best] to 36 [worst]) with a decision latitude score less than 72 (score range, 24 [worst] to 96 [best]) using the Job Content Questionnaire and evaluated at 6 months. There were 7 secondary outcomes including absenteeism and turnover.
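The composite job-strain criterion amounts to a simple rule on the two Job Content Questionnaire scores; a minimal sketch, assuming the scores are already computed (the function name is ours):

```python
def job_strain(psych_demand: int, decision_latitude: int) -> bool:
    """Job strain per the trial's definition: high psychological demand
    (score > 21 on a 9-36 scale) combined with low decision latitude
    (score < 72 on a 24-96 scale), from the Job Content Questionnaire."""
    return psych_demand > 21 and decision_latitude < 72

print(job_strain(25, 60))  # high demand, low latitude: strain
print(job_strain(18, 80))  # low demand, high latitude: no strain
```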

Results  Among 198 ICU nurses who were randomized (95 aged ≤30 years [48%] and 115 women [58%]), 182 (92%) completed the trial for the primary outcome. The trial was stopped for efficacy at the scheduled interim analysis after enrollment of 198 participants. The prevalence of job strain at 6 months was lower in the intervention group than in the control group (13% vs 67%, respectively; between-group difference, 54% [95% CI, 40%-64%]; P < .001). Absenteeism during the 6-month follow-up period was 1% in the intervention group compared with 8% in the control group (between-group difference, 7% [95% CI, 1%-15%]; P = .03). Four nurses (4%) from the intervention group left the ICU during the 6-month follow-up period compared with 12 nurses (12%) from the control group (between-group difference, 8% [95% CI, 0%-17%]; P = .04).

Conclusions and Relevance  Among ICU nurses, an intervention that included education, role-play, and debriefing resulted in a lower prevalence of job strain at 6 months compared with nurses who did not undergo this program. Further research is needed to understand which components of the program may have contributed to this result and to evaluate whether this program is cost-effective.






Bayesian analysis of early ECMO for severe ARDS: a mortality benefit?








Importance  Bayesian analysis of clinical trial data may provide useful information to aid in study interpretation, especially when trial evidence suggests that the benefits of an intervention are uncertain, such as in a trial that evaluated early extracorporeal membrane oxygenation (ECMO) for severe acute respiratory distress syndrome (ARDS).

Objective  To demonstrate the potential utility of Bayesian analyses by estimating the posterior probability, under various assumptions, that early ECMO was associated with reduced mortality in patients with very severe ARDS in a randomized clinical trial (RCT).

Design and Evidence  A post hoc Bayesian analysis of data from an RCT (ECMO to Rescue Lung Injury in Severe ARDS [EOLIA]) that included 249 patients with very severe ARDS who had been randomized to receive early ECMO (n = 124; mortality at 60 days, 35%) vs initial conventional lung-protective ventilation with the option for rescue ECMO (n = 125, mortality at 60 days, 46%). The trial was designed to detect an absolute risk reduction (ARR) of 20%, relative risk (RR) of 0.67. Statistical prior distributions were specified to represent varying levels of preexisting enthusiasm or skepticism for ECMO and by Bayesian meta-analysis of previously published studies (with downweighting to account for differences and quality between studies). The RR, credible interval (CrI), ARR, and probability of clinically important mortality benefit (varying from RR less than 1 to RR less than 0.67 and ARR from 2% or more to 20% or more) were estimated with Bayesian modeling.

Findings  Combining a minimally informative prior distribution with the findings of the EOLIA trial, the posterior probability of RR less than 1 for mortality at 60 days after randomization was 96% (RR, 0.78 [95% CrI, 0.56-1.04]); the posterior probability of RR less than 0.67 was 18%, the probability of ARR of 2% or more was 92%, and the probability of ARR of 20% or more was 2%. With a moderately enthusiastic prior, equivalent to information from a trial of 264 patients with an RR of 0.78, the estimated RR was 0.78 (95% CrI, 0.63-0.96), the probability of RR less than 1 was 99%, the probability of RR less than 0.67 was 8%, the probability of ARR of 2% or more was 97%, and the probability of ARR of 20% or more was 0%. With a strongly skeptical prior, equivalent to information from a trial of 264 patients with an RR of 1.0, the estimated RR was 0.88 (95% CrI, 0.71-1.09), the probability of RR less than 1 was 88%, the probability of RR less than 0.67 was 0%, the probability of ARR of 2% or more was 78%, and the probability of ARR of 20% or more was 0%. If the prior was informed by previous studies, the estimated RR was 0.71 (95% CrI, 0.55-0.94), the probability of RR less than 1 was 99%, the probability of RR less than 0.67 was 48%, the probability of ARR of 2% or more was 98%, and the probability of ARR of 20% or more was 4%.
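The logic behind these posterior probabilities can be sketched with a simplified conjugate-normal update on the log relative-risk scale; the death counts are back-calculated from the reported percentages, so this only approximates the published fully Bayesian results (it gives roughly 95% rather than 96% for P(RR < 1) under a diffuse prior).

```python
from math import exp, log, sqrt
from statistics import NormalDist

# EOLIA arm data from the abstract; death counts are back-calculated from the
# reported 60-day mortality (35% of 124, 46% of 125), so they are approximate.
d_ecmo, n_ecmo = 44, 124
d_ctrl, n_ctrl = 57, 125

# Normal approximation of the likelihood on the log relative-risk scale
log_rr = log((d_ecmo / n_ecmo) / (d_ctrl / n_ctrl))
se = sqrt(1/d_ecmo - 1/n_ecmo + 1/d_ctrl - 1/n_ctrl)

def posterior(prior_mean, prior_sd):
    """Conjugate normal-normal update of the prior with the trial data."""
    w_prior, w_data = 1 / prior_sd**2, 1 / se**2
    mean = (w_prior * prior_mean + w_data * log_rr) / (w_prior + w_data)
    return mean, sqrt(1 / (w_prior + w_data))

# Minimally informative prior: very diffuse on log(RR)
m, s = posterior(0.0, 10.0)
rr_post = exp(m)
p_benefit = NormalDist(m, s).cdf(log(1.0))  # posterior P(RR < 1)
print(f"posterior RR ≈ {rr_post:.2f}, P(RR < 1) ≈ {p_benefit:.0%}")
```

Enthusiastic or skeptical priors are obtained the same way by passing a nonzero `prior_mean` with a finite `prior_sd`, which pulls the posterior toward the prior in proportion to their relative precisions.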

Conclusions and Relevance  Post hoc Bayesian analysis of data from a randomized clinical trial of early extracorporeal membrane oxygenation compared with conventional lung-protective ventilation with the option for rescue extracorporeal membrane oxygenation among patients with very severe acute respiratory distress syndrome provides information about the posterior probability of mortality benefit under a broad set of assumptions that may help inform interpretation of the study findings.



Digestive decontamination against bacteremia with multidrug-resistant gram-negative bacilli?







Importance  The effects of chlorhexidine (CHX) mouthwash, selective oropharyngeal decontamination (SOD), and selective digestive tract decontamination (SDD) on patient outcomes in ICUs with moderate to high levels of antibiotic resistance are unknown.

Objective  To determine associations between CHX 2%, SOD, and SDD and the occurrence of ICU-acquired bloodstream infections with multidrug-resistant gram-negative bacteria (MDRGNB) and 28-day mortality in ICUs with moderate to high levels of antibiotic resistance.

Design, Setting, and Participants  Randomized trial conducted from December 1, 2013, to May 31, 2017, in 13 European ICUs where at least 5% of bloodstream infections are caused by extended-spectrum β-lactamase–producing Enterobacteriaceae. Patients with anticipated mechanical ventilation of more than 24 hours were eligible. The final date of follow-up was September 20, 2017.

Interventions  Standard care was daily CHX 2% body washings and a hand hygiene improvement program. Following a baseline period from 6 to 14 months, each ICU was assigned in random order to 3 separate 6-month intervention periods with either CHX 2% mouthwash, SOD (mouthpaste with colistin, tobramycin, and nystatin), or SDD (the same mouthpaste and gastrointestinal suspension with the same antibiotics), all applied 4 times daily.

Main Outcomes and Measures  The occurrence of ICU-acquired bloodstream infection with MDRGNB (primary outcome) and 28-day mortality (secondary outcome) during each intervention period compared with the baseline period.

Results  A total of 8665 patients (median age, 64.1 years; 5561 men [64.2%]) were included in the study (2251, 2108, 2224, and 2082 in the baseline, CHX, SOD, and SDD periods, respectively). ICU-acquired bloodstream infection with MDRGNB occurred among 144 patients (154 episodes) in 2.1%, 1.8%, 1.5%, and 1.2% of included patients during the baseline, CHX, SOD, and SDD periods, respectively. Absolute risk reductions were 0.3% (95% CI, −0.6% to 1.1%), 0.6% (95% CI, −0.2% to 1.4%), and 0.8% (95% CI, 0.1% to 1.6%) for CHX, SOD, and SDD, respectively, compared with baseline. Adjusted hazard ratios were 1.13 (95% CI, 0.68-1.88), 0.89 (95% CI, 0.55-1.45), and 0.70 (95% CI, 0.43-1.14) during the CHX, SOD, and SDD periods, respectively, vs baseline. Crude mortality risks on day 28 were 31.9%, 32.9%, 32.4%, and 34.1% during the baseline, CHX, SOD, and SDD periods, respectively. Adjusted odds ratios for 28-day mortality were 1.07 (95% CI, 0.86-1.32), 1.05 (95% CI, 0.85-1.29), and 1.03 (95% CI, 0.80-1.32) for CHX, SOD, and SDD, respectively, vs baseline.

Conclusions and Relevance  Among patients receiving mechanical ventilation in ICUs with moderate to high antibiotic resistance prevalence, use of CHX mouthwash, SOD, or SDD was not associated with reductions in ICU-acquired bloodstream infections caused by MDRGNB compared with standard care.



Effect of targeted polymyxin B hemoperfusion in septic shock?



Dellinger et al., The EUPHRATES Randomized Clinical Trial. JAMA. 2018;320(14):1455-1463




Importance  Polymyxin B hemoperfusion reduces blood endotoxin levels in sepsis. Endotoxin activity can be measured in blood with a rapid assay. Treating patients with septic shock and elevated endotoxin activity using polymyxin B hemoperfusion may improve clinical outcomes.

Objective  To test whether adding polymyxin B hemoperfusion to conventional medical therapy improves survival compared with conventional therapy alone among patients with septic shock and high endotoxin activity.

Design, Setting, and Participants  Multicenter, randomized clinical trial involving 450 adult critically ill patients with septic shock and an endotoxin activity assay level of 0.60 or higher enrolled between September 2010 and June 2016 at 55 tertiary hospitals in North America. Last follow-up was June 2017.

Interventions  Two polymyxin B hemoperfusion treatments (90-120 minutes) plus standard therapy completed within 24 hours of enrollment (n = 224 patients) or sham hemoperfusion plus standard therapy (n = 226 patients).

Main Outcomes and Measures  The primary outcome was mortality at 28 days among all patients randomized (all participants) and among patients randomized with a multiple organ dysfunction score (MODS) of more than 9.

Results  Among 450 eligible enrolled patients (mean age, 59.8 years; 177 [39.3%] women; mean APACHE II score, 29.4 [range, 0-71, with higher scores indicating greater severity]), 449 (99.8%) completed the study. Polymyxin B hemoperfusion was not associated with a significant difference in mortality at 28 days among all participants (treatment group, 84 of 223 [37.7%] vs sham group, 78 of 226 [34.5%]; risk difference [RD], 3.2%; 95% CI, −5.7% to 12.0%; relative risk [RR], 1.09; 95% CI, 0.85-1.39; P = .49) or in the population with a MODS of more than 9 (treatment group, 65 of 146 [44.5%] vs sham, 65 of 148 [43.9%]; RD, 0.6%; 95% CI, −10.8% to 11.9%; RR, 1.01; 95% CI, 0.78-1.31; P = .92). Overall, 264 serious adverse events were reported (65.1% treatment group vs 57.3% sham group). The most frequent serious adverse events were worsening of sepsis (10.8% treatment group vs 9.1% sham group) and worsening of septic shock (6.6% treatment group vs 7.7% sham group).

Conclusions and Relevance  Among patients with septic shock and high endotoxin activity, polymyxin B hemoperfusion treatment plus conventional medical therapy compared with sham treatment plus conventional medical therapy did not reduce mortality at 28 days.








Less postoperative renal failure with inhaled NO in cardiac surgery?


Berra et al., AJRCCM 2018


DOI: 10.1164/rccm.201710-2150OC


Rationale: No medical intervention has been identified that decreases acute kidney injury and improves renal outcome at 1 year after cardiac surgery.
Objectives: To determine whether administration of nitric oxide reduces the incidence of postoperative acute kidney injury and improves long-term kidney outcomes after multiple cardiac valve replacement requiring prolonged cardiopulmonary bypass.
Methods: Two hundred and forty-four patients undergoing elective, multiple valve replacement surgery, mostly due to rheumatic fever, were randomized to receive either nitric oxide (treatment) or nitrogen (control). Nitric oxide and nitrogen were administered via the gas exchanger during cardiopulmonary bypass and by inhalation for 24 hours postoperatively.
Measurements and Main Results: The primary outcome was as follows: oxidation of ferrous plasma oxyhemoglobin to ferric methemoglobin was associated with reduced postoperative acute kidney injury from 64% (control group) to 50% (nitric oxide group) (relative risk [RR], 0.78; 95% confidence interval [CI], 0.62–0.97; P = 0.014). Secondary outcomes were as follows: at 90 days, transition to stage 3 chronic kidney disease was reduced from 33% in the control group to 21% in the treatment group (RR, 0.64; 95% CI, 0.41–0.99; P = 0.024) and at 1 year, from 31% to 18% (RR, 0.59; 95% CI, 0.36–0.96; P = 0.017). Nitric oxide treatment reduced the overall major adverse kidney events at 30 days (RR, 0.40; 95% CI, 0.18–0.92; P = 0.016), 90 days (RR, 0.40; 95% CI, 0.17–0.92; P = 0.015), and 1 year (RR, 0.47; 95% CI, 0.20–1.10; P = 0.041).
Conclusions: In patients undergoing multiple valve replacement and prolonged cardiopulmonary bypass, administration of nitric oxide decreased the incidence of acute kidney injury, transition to stage 3 chronic kidney disease, and major adverse kidney events at 30 days, 90 days, and 1 year.



The role of APRV ventilation in paediatric ARDS


Ganesan et al., AJRCCM 2018


DOI: 10.1164/rccm.201705-0989OC


Rationale: Although case series describe benefits of airway pressure release ventilation (APRV), this mode of ventilation has not been evaluated against the conventional low–tidal volume ventilation (LoTV) in children with acute respiratory distress syndrome (ARDS).
Objectives: To compare the effect of APRV and conventional LoTV on ventilator-free days in children with ARDS.
Methods: This open-label, parallel-design randomized controlled trial was conducted in a 15-bed ICU. Children aged 1 month to 12 years satisfying the modified Berlin definition were included. We excluded children with air leaks, increased intracranial pressure, poor spontaneous breathing efforts, chronic lung disease, and beyond 24 hours of ARDS diagnosis or 72 hours of ventilation. Children were randomized using unstratified, variable-sized block technique. A priori interim analysis was planned at 50% enrollment. All enrolled children were followed up until 180 days after enrollment or death, whichever was earlier.
Measurements and Main Results: The trial was terminated after 50% enrollment (52 children) when analysis revealed higher mortality in the intervention arm. Ventilator-free days were statistically similar in both arms (P = 0.23). The 28-day all-cause mortality was 53.8% in APRV as compared with 26.9% among control subjects (risk ratio, 2.0; 95% confidence interval, 0.97–4.1; Fisher exact P = 0.089). The multivariate-adjusted risk ratio of death for APRV compared with LoTV was 2.02 (95% confidence interval, 0.99–4.12; P = 0.05). Higher mean airway pressures, greater spontaneous breathing, and early improvement in oxygenation were seen in the intervention arm.
Conclusions: APRV, as a primary ventilation strategy in children with ARDS, was associated with a trend toward higher mortality compared with the conventional LoTV. Limitations should be considered while interpreting these results.


The deleterious role of deflation episodes in ARDS in the rat


Katira et al., AJRCCM 2018


DOI: 10.1164/rccm.201801-0178OC


Rationale: Ventilator management in acute respiratory distress syndrome usually focuses on setting parameters, but events occurring at ventilator disconnection are not well understood.
Objectives: To determine if abrupt deflation after sustained inflation causes lung injury.
Methods: Male Sprague-Dawley rats were ventilated (low Vt, 6 ml/kg) and randomized to control (n = 6; positive end-expiratory pressure [PEEP], 3 cm H2O; 100 min) or intervention (n = 6; PEEP, 3–11 cm H2O over 70 min; abrupt deflation to zero PEEP; ventilation for 30 min). Lung function and injury was assessed, scanning electron microscopy performed, and microvascular leak timed by Evans blue dye (n = 4/group at 0, 2, 5, 10, and 20 min after deflation). Hemodynamic assessment included systemic arterial pressure (n = 6), echocardiography (n = 4), and right (n = 6) and left ventricular pressures (n = 6).
Measurements and Main Results: Abrupt deflation after sustained inflation (vs. control) caused acute lung dysfunction (compliance 0.48 ± 1.0 vs. 0.82 ± 0.2 ml/cm H2O, oxygen saturation as measured by pulse oximetry 67 ± 23.5 vs. 91 ± 4.4%; P < 0.05) and injury (wet/dry ratio 6.1 ± 0.6 vs. 4.6 ± 0.4; P < 0.01). Vascular leak was absent before deflation and maximal 5–10 minutes thereafter; injury was predominantly endothelial. At deflation, left ventricular preload, systemic blood pressure, and left ventricular end-diastolic pressure increased precipitously in proportion to the degree of injury. Injury caused later right ventricular failure. Sodium nitroprusside prevented the increase in systemic blood pressure and left ventricular end-diastolic pressure associated with deflation, and prevented injury. Injury did not occur with gradual deflation.
Conclusions: Abrupt deflation after sustained inflation can cause acute lung injury. It seems to be mediated by acute left ventricular decompensation (caused by increased left ventricular preload and afterload) that elevates pulmonary microvascular pressure; this directly injures the endothelium and causes edema, which is potentiated by the surge in pulmonary perfusion.



Individualised intraoperative PEEP settings for less atelectasis


Pereira et al., Anesthesiology, 2018




Background: Intraoperative lung-protective ventilation has been recommended to reduce postoperative pulmonary complications after abdominal surgery. Although the protective role of a more physiologic tidal volume has been established, the added protection afforded by positive end-expiratory pressure (PEEP) remains uncertain. The authors hypothesized that a low fixed PEEP might not fit all patients and that an individually titrated PEEP during anesthesia might improve lung function during and after surgery.

Methods: Forty patients were studied in the operating room (20 laparoscopic and 20 open-abdominal). They underwent elective abdominal surgery and were randomized to institutional PEEP (4 cm H2O) or electrical impedance tomography–guided PEEP (applied after recruitment maneuvers and targeted at minimizing lung collapse and hyperdistension, simultaneously). Patients were extubated without changing selected PEEP or fractional inspired oxygen tension while under anesthesia and submitted to chest computed tomography after extubation. Our primary goal was to individually identify the electrical impedance tomography–guided PEEP value producing the best compromise of lung collapse and hyperdistention.

Results: Electrical impedance tomography–guided PEEP varied markedly across individuals (median, 12 cm H2O; range, 6 to 16 cm H2O; 95% CI, 10–14). Compared with PEEP of 4 cm H2O, patients randomized to the electrical impedance tomography–guided strategy had less postoperative atelectasis (6.2 ± 4.1 vs. 10.8 ± 7.1% of lung tissue mass; P = 0.017) and lower intraoperative driving pressures (mean values during surgery of 8.0 ± 1.7 vs. 11.6 ± 3.8 cm H2O; P < 0.001). The electrical impedance tomography–guided PEEP arm had higher intraoperative oxygenation (435 ± 62 vs. 266 ± 76 mmHg for laparoscopic group; P < 0.001), while presenting equivalent hemodynamics (mean arterial pressure during surgery of 80 ± 14 vs. 78 ± 15 mmHg; P = 0.821).

Conclusions: PEEP requirements vary widely among patients receiving protective tidal volumes during anesthesia for abdominal surgery. Individualized PEEP settings could reduce postoperative atelectasis (measured by computed tomography) while improving intraoperative oxygenation and driving pressures, causing minimum side effects.



How soon should beta-blockers be resumed postoperatively?




Khanna et al., Anesthesiology, 2018





Background: Beta (β) blockers reduce the risk of postoperative atrial fibrillation and should be restarted after surgery, but it remains unclear when best to resume β blockers postoperatively. The authors thus evaluated the relationship between timing of resumption of β blockers and atrial fibrillation in patients recovering from noncardiothoracic and nonvascular surgery.

Methods: The authors evaluated 8,201 adult β-blocker users with no previous history of atrial fibrillation who stayed at least two nights after noncardiothoracic and nonvascular surgery as a retrospective observational cohort. After propensity score matching on baseline and intraoperative variables, 1,924 patients who did resume β blockers by the end of postoperative day 1 were compared with 973 patients who had not resumed by that time on postoperative atrial fibrillation using logistic regression. A secondary matched analysis compared 3,198 patients who resumed β blockers on the day of surgery with 3,198 who resumed thereafter.

Results: Of propensity score–matched patients who resumed β blockers by end of postoperative day 1, 4.9% (94 of 1,924) developed atrial fibrillation, compared with 7.0% (68 of 973) of those who resumed thereafter (adjusted odds ratio, 0.69; 95% CI, 0.50–0.95; P = 0.026). Patients who resumed β blockers on day of surgery had an atrial fibrillation incidence of 4.9% versus 5.8% for those who started thereafter (odds ratio, 0.84; 95% CI, 0.67–1.04; P = 0.104).

Conclusions: Resuming β blockers in chronic users by the end of the first postoperative day may be associated with lower odds of in-hospital atrial fibrillation. However, there seems to be little advantage to restarting on the day of surgery itself.
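
As a sanity check, the unadjusted odds ratio computed directly from the raw counts in the abstract (94/1,924 vs. 68/973) lands very close to the published propensity-adjusted value of 0.69. A minimal sketch:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table: events a of (a+b) exposed, c of (c+d) unexposed."""
    return (a * d) / (b * c)

# Early resumers: 94 AF events among 1,924; late resumers: 68 among 973
or_unadj = odds_ratio(94, 1924 - 94, 68, 973 - 68)
```

This returns roughly 0.68; the paper's 0.69 is the regression-adjusted estimate, so the near-agreement is expected but not guaranteed in general.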



A score to estimate neurological risk after craniotomy?



Cinotti et al., Anesthesiology, 2018






Background: Craniotomy for brain tumor displays significant morbidity and mortality, and no score is available to discriminate high-risk patients. Our objective was to validate a prediction score for postoperative neurosurgical complications in this setting.

Methods: Creation of a score in a learning cohort from a prospective specific database of 1,094 patients undergoing elective brain tumor craniotomy in one center from 2008 to 2012. The validation cohort was validated in a prospective multicenter independent cohort of 830 patients from 2013 to 2015 in six university hospitals in France. The primary outcome variable was postoperative neurologic complications requiring in–intensive care unit management (intracranial hypertension, intracranial bleeding, status epilepticus, respiratory failure, impaired consciousness, unexpected motor deficit). The least absolute shrinkage and selection operator method was used for potential risk factor selection with logistic regression.

Results: Severe complications occurred in 125 (11.4%) and 90 (10.8%) patients in the learning and validation cohorts, respectively. The independent risk factors for severe complications were related to the patient (Glasgow Coma Score before surgery at or below 14, history of brain tumor surgery), tumor characteristics (greatest diameter, cerebral midline shift at least 3 mm), and perioperative management (transfusion of blood products, maximum and minimal systolic arterial pressure, duration of surgery). The positive predictive value of the score at or below 3% was 12.1%, and the negative predictive value was 100% in the learning cohort. In–intensive care unit mortality was observed in eight (0.7%) and six (0.7%) patients in the learning and validation cohorts, respectively.

Conclusions: The validation of prediction scores is the first step toward on-demand intensive care unit admission. Further research is needed to improve the score’s performance before routine use.
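
The least absolute shrinkage and selection operator (lasso) used for risk-factor selection works by driving uninformative coefficients to exactly zero inside the logistic model. A minimal sketch on synthetic data (not the study's cohort; the features, effect sizes, and penalty strength below are invented for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic cohort: 8 candidate risk factors, only the first two truly
# drive the complication risk.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1]
y = (rng.random(500) < 1 / (1 + np.exp(-logit))).astype(int)

# L1 (lasso) penalty shrinks weak coefficients to exactly zero,
# performing variable selection within the logistic regression itself.
lasso_logit = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
lasso_logit.fit(X, y)
selected = np.flatnonzero(lasso_logit.coef_[0])
```

With this setup the two informative predictors survive the penalty while most noise features are dropped; the penalty strength (`C`) would normally be chosen by cross-validation.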



Artificial Intelligence special:


– Predicting arterial hypotension



Hatib et al., Anesthesiology, 2018


Background: With appropriate algorithms, computers can learn to detect patterns and associations in large data sets. The authors’ goal was to apply machine learning to arterial pressure waveforms and create an algorithm to predict hypotension. The algorithm detects early alteration in waveforms that can herald the weakening of cardiovascular compensatory mechanisms affecting preload, afterload, and contractility.

Methods: The algorithm was developed with two different data sources: (1) a retrospective cohort, used for training, consisting of 1,334 patients’ records with 545,959 min of arterial waveform recording and 25,461 episodes of hypotension; and (2) a prospective, local hospital cohort used for external validation, consisting of 204 patients’ records with 33,236 min of arterial waveform recording and 1,923 episodes of hypotension. The algorithm relates a large set of features calculated from the high-fidelity arterial pressure waveform to the prediction of an upcoming hypotensive event (mean arterial pressure < 65 mmHg). Receiver-operating characteristic curve analysis evaluated the algorithm’s success in predicting hypotension, defined as mean arterial pressure less than 65 mmHg.

Results: Using 3,022 individual features per cardiac cycle, the algorithm predicted arterial hypotension with a sensitivity and specificity of 88% (85 to 90%) and 87% (85 to 90%) 15 min before a hypotensive event (area under the curve, 0.95 [0.94 to 0.95]); 89% (87 to 91%) and 90% (87 to 92%) 10 min before (area under the curve, 0.95 [0.95 to 0.96]); 92% (90 to 94%) and 92% (90 to 94%) 5 min before (area under the curve, 0.97 [0.97 to 0.98]).

Conclusions: The results demonstrate that a machine-learning algorithm can be trained, with large data sets of high-fidelity arterial waveforms, to predict hypotension in surgical patients’ records.
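
The sensitivity/specificity pairs and areas under the curve reported above come from standard receiver-operating-characteristic analysis. A self-contained sketch on synthetic scores (the waveform-derived features and their distributions here are invented; only the ROC mechanics match the paper's evaluation):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Synthetic risk scores: windows preceding hypotension score higher on
# average than uneventful windows.
rng = np.random.default_rng(1)
score_event = rng.normal(2.0, 1.0, 1000)  # pre-hypotension windows
score_none = rng.normal(0.0, 1.0, 1000)   # uneventful windows
y_true = np.r_[np.ones(1000), np.zeros(1000)]
y_score = np.r_[score_event, score_none]

auc = roc_auc_score(y_true, y_score)
fpr, tpr, thresholds = roc_curve(y_true, y_score)
# Sensitivity/specificity pair at the decision threshold nearest 1.0
i = np.argmin(np.abs(thresholds - 1.0))
sensitivity, specificity = tpr[i], 1 - fpr[i]
```

Sweeping the threshold traces the whole ROC curve; the AUC summarises discrimination across all possible operating points, which is why the paper reports it alongside specific sensitivity/specificity pairs.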



– Predicting postoperative mortality


Lee et al., Anesthesiology, 2018

Background: The authors tested the hypothesis that deep neural networks trained on intraoperative features can predict postoperative in-hospital mortality.

Methods: The data used to train and validate the algorithm consists of 59,985 patients with 87 features extracted at the end of surgery. Feed-forward networks with a logistic output were trained using stochastic gradient descent with momentum. The deep neural networks were trained on 80% of the data, with 20% reserved for testing. The authors assessed improvement of the deep neural network by adding American Society of Anesthesiologists (ASA) Physical Status Classification and robustness of the deep neural network to a reduced feature set. The networks were then compared to ASA Physical Status, logistic regression, and other published clinical scores including the Surgical Apgar, Preoperative Score to Predict Postoperative Mortality, Risk Quantification Index, and the Risk Stratification Index.

Results: In-hospital mortality in the training and test sets were 0.81% and 0.73%. The deep neural network with a reduced feature set and ASA Physical Status classification had the highest area under the receiver operating characteristics curve, 0.91 (95% CI, 0.88 to 0.93). The highest logistic regression area under the curve was found with a reduced feature set and ASA Physical Status (0.90, 95% CI, 0.87 to 0.93). The Risk Stratification Index had the highest area under the receiver operating characteristics curve, at 0.97 (95% CI, 0.94 to 0.99).

Conclusions: Deep neural networks can predict in-hospital mortality based on automatically extractable intraoperative data, but are not (yet) superior to existing methods.
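
The model class described in the methods — a feed-forward network with a logistic output, trained by stochastic gradient descent with momentum, on an 80/20 split — can be sketched on synthetic data (the 87 intraoperative features and the outcome model below are invented stand-ins, not the study's dataset):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# Synthetic "intraoperative features" with a rare binary outcome
rng = np.random.default_rng(2)
X = rng.normal(size=(4000, 10))
logit = 2.0 * X[:, 0] + 1.0 * X[:, 1] - 1.0 * X[:, 2] - 3.0
y = (rng.random(4000) < 1 / (1 + np.exp(-logit))).astype(int)

# 80% training / 20% held-out test, as in the abstract
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Feed-forward network with logistic output, SGD with momentum
net = MLPClassifier(hidden_layer_sizes=(32, 16), solver="sgd",
                    momentum=0.9, learning_rate_init=0.05,
                    max_iter=500, random_state=0)
net.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, net.predict_proba(X_te)[:, 1])
```

The hidden-layer sizes and learning rate here are arbitrary; the study tuned its own architecture and also benchmarked against logistic regression and clinical scores.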


– Predicting postinduction arterial hypotension


Kendale et al., Anesthesiology, 2018



Background: Hypotension is a risk factor for adverse perioperative outcomes. Machine-learning methods allow large amounts of data for development of robust predictive analytics. The authors hypothesized that machine-learning methods can provide prediction for the risk of postinduction hypotension.

Methods: Data was extracted from the electronic health record of a single quaternary care center from November 2015 to May 2016 for patients over age 12 that underwent general anesthesia, without procedure exclusions. Multiple supervised machine-learning classification techniques were attempted, with postinduction hypotension (mean arterial pressure less than 55 mmHg within 10 min of induction by any measurement) as primary outcome, and preoperative medications, medical comorbidities, induction medications, and intraoperative vital signs as features. Discrimination was assessed using cross-validated area under the receiver operating characteristic curve. The best performing model was tuned and final performance assessed using split-set validation.

Results: Out of 13,323 cases, 1,185 (8.9%) experienced postinduction hypotension. Area under the receiver operating characteristic curve using logistic regression was 0.71 (95% CI, 0.70 to 0.72), support vector machines was 0.63 (95% CI, 0.58 to 0.60), naive Bayes was 0.69 (95% CI, 0.67 to 0.69), k-nearest neighbor was 0.64 (95% CI, 0.63 to 0.65), linear discriminant analysis was 0.72 (95% CI, 0.71 to 0.73), random forest was 0.74 (95% CI, 0.73 to 0.75), neural nets 0.71 (95% CI, 0.69 to 0.71), and gradient boosting machine 0.76 (95% CI, 0.75 to 0.77). Test set area for the gradient boosting machine was 0.74 (95% CI, 0.72 to 0.77).

Conclusions: The success of this technique in predicting postinduction hypotension demonstrates feasibility of machine-learning models for predictive analytics in the field of anesthesiology, with performance dependent on model selection and appropriate tuning.





Anesthesia team composition: is a physician in the room unnecessary?



Sun et al., Anesthesiology, 2018




Background: In the United States, anesthesia care can be provided by an anesthesia care team consisting of nonphysician providers (nurse anesthetists and anesthesiologist assistants) working under the supervision of a physician anesthesiologist. Nurse anesthetists may practice nationwide, whereas anesthesiologist assistants are restricted to 16 states. To inform policies concerning the expanded use of anesthesiologist assistants, the authors examined whether the specific anesthesia care team composition (physician anesthesiologist plus nurse anesthetist or anesthesiologist assistant) was associated with differences in perioperative outcomes.

Methods: A retrospective analysis was performed of national claims data for 443,098 publicly insured elderly (ages 65 to 89 yr) patients who underwent inpatient surgery between January 1, 2004, and December 31, 2011. The differences in inpatient mortality, spending, and length of stay between cases where an anesthesiologist supervised an anesthesiologist assistant compared to cases where an anesthesiologist supervised a nurse anesthetist were estimated. The approach used a quasirandomization technique known as instrumental variables to reduce confounding.

Results: The adjusted mortality for care teams with anesthesiologist assistants was 1.6% (95% CI, 1.4 to 1.8) versus 1.7% for care teams with nurse anesthetists (95% CI, 1.7 to 1.7; difference −0.08; 95% CI, −0.3 to 0.1; P = 0.47). Compared to care teams with nurse anesthetists, care teams with anesthesiologist assistants were associated with non–statistically significant decreases in length of stay (−0.009 days; 95% CI, −0.1 to 0.1; P = 0.89) and medical spending (−$56; 95% CI, −334 to 223; P = 0.70).

Conclusions: The specific composition of the anesthesia care team was not associated with any significant differences in mortality, length of stay, or inpatient spending.



Regional anaesthesia: a failure for liposomal bupivacaine?


Pichler et al., Anesthesiology, 2018




Background: Although some trials suggest benefits of liposomal bupivacaine, data on real-world use and effectiveness is lacking. This study analyzed the impact of liposomal bupivacaine use (regardless of administration route) on inpatient opioid prescription, resource utilization, and opioid-related complications among patients undergoing total knee arthroplasties with a peripheral nerve block. It was hypothesized that liposomal bupivacaine has limited clinical influence on the studied outcomes.

Methods: The study included data on 88,830 total knee arthroplasties performed with a peripheral nerve block (Premier Healthcare Database 2013 to 2016). Multilevel multivariable regressions measured associations between use of liposomal bupivacaine and (1) inpatient opioid prescription (extracted from billing) and (2) length of stay, cost of hospitalization, as well as opioid-related complications. To reflect the difference between statistical and clinical significance, a relative change of −15% in outcomes was assumed to be clinically important.

Results: Overall, liposomal bupivacaine was used in 21.2% (n = 18,817) of patients that underwent a total knee arthroplasty with a peripheral nerve block. Liposomal bupivacaine use was not associated with a clinically meaningful reduction in inpatient opioid prescription (group median, 253 mg of oral morphine equivalents, adjusted effect −9.3% CI −11.1%, −7.5%; P < 0.0001) and length of stay (group median, 3 days, adjusted effect −8.8% CI −10.1%, −7.5%; P < 0.0001) with no effect on cost of hospitalization. Most importantly, liposomal bupivacaine use was not associated with decreased odds for opioid-related complications.

Conclusions: Liposomal bupivacaine was not associated with a clinically relevant improvement in inpatient opioid prescription, resource utilization, or opioid-related complications in patients who received modern pain management including a peripheral nerve block.




Preventing postoperative depressive symptoms with ketamine?


Mashour et al., BJA 2018


DOI: 10.1016/j.bja.2018.03.030


Ketamine is a general anaesthetic with anti-depressant effects at subanaesthetic doses. We hypothesised that intraoperative administration of ketamine would prevent or mitigate postoperative depressive symptoms in surgical patients.
We conducted an international, randomised clinical trial testing the effects of intraoperative administration of ketamine [0.5 mg kg−1 (Lo-K) or 1.0 mg kg−1 (Hi-K)] vs control [saline placebo (P)] in patients ≥60 yr old undergoing major surgery with general anaesthesia. We administered the Patient Health Questionnaire-8 before the operation, on postoperative day (POD) 3 (primary outcome), and on POD30 to assess depressive symptoms, a secondary outcome of the original trial.
There was no significant difference on POD3 in the proportion of patients with symptoms suggestive of depression between the placebo [23/156 (14.7%)] and combined ketamine (Lo-K plus Hi-K) [61/349 (17.5%)] groups [difference = –2.7%; 95% confidence interval (CI), –9.4% to 5.0%; P=0.446]. Of the total cohort, 9.6% (64/670; 95% CI, 7.6–12.0%) had symptoms suggestive of depression before operation, which increased to 16.6% (84/505; 95% CI, 13.6–20.1%) on POD3, and decreased to 11.9% (47/395; 95% CI, 9.1–15.5%) on POD30. Of the patients with depressive symptoms on POD3 and POD30, 51% and 49%, respectively, had no prior history of depression or depressive symptoms.
Major surgery is associated with new-onset symptoms suggestive of depression in patients ≥60 yr old. Intraoperative administration of subanaesthetic ketamine does not appear to prevent or improve depressive symptoms.

Lower propofol doses with IV lidocaine during general anaesthesia for colonoscopy

Forster et al., BJA 2018
DOI: 10.1016/j.bja.2018.06.019
Propofol use during sedation for colonoscopy can result in cardiopulmonary complications. Intravenous lidocaine can alleviate visceral pain and decrease propofol requirements during surgery. We tested the hypothesis that i.v. lidocaine reduces propofol requirements during colonoscopy and improves post-colonoscopy recovery.
Forty patients undergoing colonoscopy were included in this randomised placebo-controlled study. After titration of propofol to produce unconsciousness, patients were given i.v. lidocaine (1.5 mg kg−1 then 4 mg kg−1 h−1) or the same volume of saline. Sedation was standardised and combined propofol and ketamine. The primary endpoint was propofol requirements. Secondary endpoints were: number of oxygen desaturation episodes, endoscopists’ working conditions, discharge time to the recovery room, post-colonoscopy pain, fatigue.
Lidocaine infusion resulted in a significant reduction in propofol requirements: 58 (47) vs 121 (109) mg (P=0.02). Doses of ketamine were similar in the two groups: 19 (2) vs 20 (3) mg in the lidocaine and saline groups, respectively. Number of episodes of oxygen desaturation, endoscopists’ comfort, and times for discharge to the recovery room were similar in both groups. Post-colonoscopy pain (P<0.01) and fatigue (P=0.03) were significantly lower in the lidocaine group.
Intravenous infusion of lidocaine resulted in a 50% reduction in propofol dose requirements during colonoscopy. Immediate post-colonoscopy pain and fatigue were also improved by lidocaine.


Risk factors for emergence agitation in the recovery room


Fields et al., BJA 2018


DOI: 10.1016/j.bja.2018.07.017



Respiratory mechanics during general anaesthesia

Grieco et al., BJA 2018
DOI: 10.1016/j.bja.2018.03.022




Is 20% albumin safe?



Mårtensson et al., ICM, 2018

link to the article



We set out to assess the resuscitation fluid requirements and physiological and clinical responses of intensive care unit (ICU) patients resuscitated with 20% albumin versus 4–5% albumin.


We performed a randomised controlled trial in 321 adult patients requiring fluid resuscitation within 48 h of admission to three ICUs in Australia and the UK.


The cumulative volume of resuscitation fluid at 48 h (primary outcome) was lower in the 20% albumin group than in the 4–5% albumin group [median difference −600 ml, 95% confidence interval (CI) −800 to −400; P < 0.001]. The 20% albumin group had lower cumulative fluid balance at 48 h (mean difference −576 ml, 95% CI −1033 to −119; P = 0.01). Peak albumin levels were higher but sodium and chloride levels lower in the 20% albumin group. Median (interquartile range) duration of mechanical ventilation was 12.0 h (7.6, 33.1) in the 20% albumin group and 15.3 h (7.7, 58.1) in the 4–5% albumin group (P = 0.13); the proportion of patients commenced on renal replacement therapy after randomization was 3.3% and 4.2% (P = 0.67), respectively, and the proportion discharged alive from ICU was 97.4% and 91.1% (P = 0.02).


Resuscitation with 20% albumin decreased resuscitation fluid requirements, minimized positive early fluid balance and was not associated with any evidence of harm compared with 4–5% albumin. These findings support the safety of further exploration of resuscitation with 20% albumin in larger randomised trials.


Optiflow in children: is 2 L/kg/min enough?


Milési et al., ICM, 2018

Link to the article



High-flow nasal cannula (HFNC) therapy is increasingly proposed as first-line respiratory support for infants with acute viral bronchiolitis (AVB). Most teams use 2 L/kg/min, but no study compared different flow rates in this setting. We hypothesized that 3 L/kg/min would be more efficient for the initial management of these patients.


A randomized controlled trial was performed in 16 pediatric intensive care units (PICUs) to compare these two flow rates in infants up to 6 months old with moderate to severe AVB and treated with HFNC. The primary endpoint was the percentage of failure within 48 h of randomization, using prespecified criteria of worsening respiratory distress and discomfort.


From November 2016 to March 2017, 142 infants were allocated to the 2-L/kg/min (2L) flow rate and 144 to the 3-L/kg/min (3L) flow rate. Failure rate was comparable between groups: 38.7% (2L) vs. 38.9% (3L; p = 0.98). Worsening respiratory distress was the most common cause of failure in both groups: 49% (2L) vs. 39% (3L; p = 0.45). In the 3L group, discomfort was more frequent (43% vs. 16%, p = 0.002) and PICU stays were longer (6.4 vs. 5.3 days, p = 0.048). The intubation rates [2.8% (2L) vs. 6.9% (3L), p = 0.17] and durations of invasive [0.2 (2L) vs. 0.5 (3L) days, p = 0.10] and noninvasive [1.4 (2L) vs. 1.6 (3L) days, p = 0.97] ventilation were comparable. No patient had air leak syndrome or died.


In young infants with AVB supported with HFNC, 3 L/kg/min did not reduce the risk of failure compared with 2 L/kg/min.



A study of different hypothermia targets in out-of-hospital cardiac arrest


Lopez-de-Sa et al. (the FROST-I trial), Intensive Care Med 2018

Link to the article




To obtain initial data on the effect of different levels of targeted temperature management (TTM) in out-of-hospital cardiac arrest (OHCA).


We designed a multicentre pilot trial with 1:1:1 randomization to either 32 °C (n = 52), 33 °C (n = 49) or 34 °C (n = 49), via endovascular cooling devices during a 24-h period in comatose survivors of witnessed OHCA and initial shockable rhythm. The primary endpoint was the percentage of subjects surviving with good neurologic outcome defined by a modified Rankin Scale (mRS) score of ≤ 3, blindly assessed at 90 days.


At baseline, different proportions of patients who had received defibrillation administered by a bystander were assigned to groups of 32 °C (13.5%), 33 °C (34.7%) and 34 °C (28.6%; p = 0.03). The percentage of patients with an mRS ≤ 3 at 90 days (primary endpoint) was 65.3, 65.9 and 65.9% in patients assigned to 32, 33 and 34 °C, respectively, non-significant (NS). The multivariate Cox proportional hazards model identified two variables significantly related to the primary outcome: male gender and defibrillation by a bystander. Among the 43 patients who died before 90 days, 28 died following withdrawal of life-sustaining therapy, as follows: 7/16 (43.8%), 10/13 (76.9%) and 11/14 (78.6%) of patients assigned to 32, 33 and 34 °C, respectively (trend test p = 0.04). All levels of cooling were well tolerated.


There were no statistically significant differences in neurological outcomes among the different levels of TTM. However, future research should explore the efficacy of TTM at 32 °C.



Targeting different PaO2 and PaCO2 goals after cardiac arrest?



Jakkula et al., ICM, 2018




We assessed the effects of targeting low-normal or high-normal arterial carbon dioxide tension (PaCO2) and normoxia or moderate hyperoxia after out-of-hospital cardiac arrest (OHCA) on markers of cerebral and cardiac injury.


Using a 2³ factorial design, we randomly assigned 123 patients resuscitated from OHCA to low-normal (4.5–4.7 kPa) or high-normal (5.8–6.0 kPa) PaCO2 and to normoxia (arterial oxygen tension [PaO2] 10–15 kPa) or moderate hyperoxia (PaO2 20–25 kPa) and to low-normal or high-normal mean arterial pressure during the first 36 h in the intensive care unit. Here we report the results of the low-normal vs. high-normal PaCO2 and normoxia vs. moderate hyperoxia comparisons. The primary endpoint was the serum concentration of neuron-specific enolase (NSE) 48 h after cardiac arrest. Secondary endpoints included S100B protein and cardiac troponin concentrations, continuous electroencephalography (EEG) and near-infrared spectroscopy (NIRS) results and neurologic outcome at 6 months.


In total 120 patients were included in the analyses. There was a clear separation in PaCO2 (p < 0.001) and PaO2 (p < 0.001) between the groups. The median (interquartile range) NSE concentration at 48 h was 18.8 µg/l (13.9–28.3 µg/l) in the low-normal PaCO2 group and 22.5 µg/l (14.2–34.9 µg/l) in the high-normal PaCO2 group, p = 0.400; and 22.3 µg/l (14.8–27.8 µg/l) in the normoxia group and 20.6 µg/l (14.2–34.9 µg/l) in the moderate hyperoxia group, p = 0.594). High-normal PaCO2 and moderate hyperoxia increased NIRS values. There were no differences in other secondary outcomes.


Both high-normal PaCO2 and moderate hyperoxia increased NIRS values, but the NSE concentration was unaffected.






Cooler dialysate for better haemodynamic tolerance of prolonged intermittent haemodialysis




Edrees et al., CCM 2018

DOI: 10.1097/CCM.0000000000003508, PMID: 30394919




Acute kidney injury requiring renal replacement therapy is associated with high morbidity and mortality. Complications of renal replacement therapy include hemodynamic instability with ensuing shortened treatments, inadequate ultrafiltration, and delay in renal recovery. Studies have shown that lowering dialysate temperature in patients with end-stage renal disease is associated with a decrease in the frequency of intradialytic hypotension. However, data regarding mitigation of hypotension by lowering dialysate temperature in patients with acute kidney injury are scarce. We conducted a prospective, randomized, cross-over pilot study to evaluate the effect of lower dialysate temperature on hemodynamic status of critically ill patients with acute kidney injury during prolonged intermittent renal replacement therapy.


Design: Single-center prospective, randomized, cross-over study.


ICUs and a step down unit in a tertiary referral center.


Acute kidney injury patients undergoing prolonged intermittent renal replacement therapy.


Participants were randomized to start prolonged intermittent renal replacement therapy with a dialysate temperature of either 35°C or 37°C.

Measurements and Main Results:

The primary endpoint was the number of hypotensive events, as defined by any of the following: decrease in systolic blood pressure greater than or equal to 20 mm Hg, decrease in mean arterial pressure greater than or equal to 10 mm Hg, decrease in ultrafiltration, or increase in vasopressor requirements. The number of events was analyzed by Poisson regression and other outcomes with repeated-measures analysis of variance. Twenty-one patients underwent a total of 78 prolonged intermittent renal replacement therapy sessions, 39 in each arm. The number of hypotensive events was twice as high during treatments with dialysate temperature of 37°C, compared with treatments with the cooler dialysate (1.49 ± 1.12 vs 0.72 ± 0.69; incidence rate ratio, 2.06; p ≤ 0.0001). Treatment sessions with cooler dialysate were more likely to reach prescribed ultrafiltration targets.


Patients with acute kidney injury undergoing prolonged intermittent renal replacement therapy with cooler dialysate experienced significantly less hypotension during treatment. Prevention of hemodynamic instability during renal replacement therapy helped to achieve ultrafiltration goals and may help to prevent volume overload in critically ill patients.
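As a quick sanity check on the Poisson regression result above, the incidence rate ratio is (up to rounding of the published means) simply the ratio of mean hypotensive events per session between the two arms:

```python
# Ratio of mean events per session, 37°C vs 35°C dialysate,
# using the rounded means quoted in the abstract (1.49 vs 0.72).
irr = 1.49 / 0.72
print(round(irr, 2))  # ~2.07, consistent with the reported IRR of 2.06
```

The small discrepancy with the reported 2.06 comes from the abstract rounding the per-arm means to two decimals.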



State of the art of statistical methods used in critical care clinical trials


McCullough et al., CCM 2018


DOI: 10.1097/CCM.0000000000003380


Objectives: Incomplete biostatistical knowledge among clinicians is widely described. This study aimed to categorize and summarize the statistical methodology within recent critical care randomized controlled trials.
Design: Descriptive analysis, with comparison of findings to previous work.
Setting: Ten high-impact clinical journals publishing trials in critical illness.
Subjects: Randomized controlled trials published between 2011 and 2015 inclusive.
Interventions: Data extraction from published reports.
Measurements and Main Results: The frequency and overall proportion of each statistical method encountered, grouped according to those used to generate each trial’s primary outcome and separately according to underlying statistical methodology. Subsequent analysis compared these proportions with previously published reports. A total of 580 statistical tests or methods were identified within 116 original randomized controlled trials published between 2011 and 2015. Overall, the chi-square test was the most commonly encountered (70/116; 60%), followed by the Cox proportional hazards model (63/116; 54%) and logistic regression (53/116; 46%). When classified according to underlying statistical assumptions, the most common types of analyses were tests of 2 × 2 contingency tables and nonparametric tests of rank order. A greater proportion of more complex methodology was observed compared with trial reports from previous work.
Conclusions: Physicians assessing recent randomized controlled trials in critical illness encounter results derived from a substantial and potentially expanding range of biostatistical methods. In-depth training in the assumptions and limitations of these current and emerging biostatistical methods may not be practically achievable for most clinicians, making accessible specialist biostatistical support an asset to evidence-based clinical practice.
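The chi-square test, the method most frequently encountered in the survey above, reduces for a 2 × 2 contingency table to a one-line formula. A sketch with made-up counts (a hypothetical trial with 10/30 events under treatment vs 20/30 under control):

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: treatment 10 events / 20 non-events,
# control 20 events / 10 non-events.
print(round(chi2_2x2(10, 20, 20, 10), 2))
```

This is the uncorrected Pearson statistic; real trial analyses would typically add a continuity correction or use an exact test for small cells.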


Improved prognosis with early enteral nutrition in major burn patients


Pu et al., CCM 2018


DOI: 10.1097/CCM.0000000000003445


Objectives: To identify, appraise, and synthesize current evidence to determine whether early enteral nutrition alters patient outcomes from major burn injury.
Data Sources: Medline, Embase, and the China National Knowledge Infrastructure were searched. The close out date was May 1, 2018.
Study Selection: Early enteral nutrition was defined as a standard formula commenced within 24 hours of injury or admission to ICU or burn unit. Comparators included any form of nutrition support except early enteral nutrition. Only randomized controlled trials reporting patient-centered outcomes were eligible for inclusion.
Data Extraction: The primary outcome was mortality. Gastrointestinal hemorrhage, sepsis, pneumonia, renal failure, and hospital stay were evaluated as secondary outcomes.
Data Synthesis: Nine hundred fifty-eight full-text articles were retrieved and screened. Seven randomized controlled trials enrolling 527 participants with major burn injury were included. Compared with all other types of nutrition support, early enteral nutrition significantly reduced mortality (odds ratio, 0.36; 95% CI, 0.18–0.72; p = 0.003; I² = 0%). Early enteral nutrition also significantly reduced gastrointestinal hemorrhage (odds ratio, 0.21; 95% CI, 0.09–0.51; p = 0.0005; I² = 0%), sepsis (odds ratio, 0.23; 95% CI, 0.11–0.48; p < 0.0001; I² = 0%), pneumonia (odds ratio, 0.41; 95% CI, 0.21–0.81; p = 0.01; I² = 63%), renal failure (odds ratio, 0.27; 95% CI, 0.09–0.82; p = 0.02; I² = 32%), and duration of hospital stay (–15.31 d; 95% CI, –20.43 to –10.20; p < 0.00001; I² = 0%).
Conclusions: The improvements in clinical outcomes demonstrated in this meta-analysis are consistent with the physiologic rationale cited to support clinical recommendations for early enteral nutrition made by major clinical practice guidelines: gut integrity is preserved leading to fewer gastrointestinal hemorrhages, less infectious complications, a reduction in consequent organ failures, and a reduction in the onset of sepsis. The cumulative benefit of these effects improves patient survival and reduces hospital length of stay.
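One quick plausibility check on pooled odds ratios like those above: confidence intervals for an OR are symmetric on the log scale, so the point estimate should sit close to the geometric mean of the CI bounds. Using the mortality figures quoted (OR 0.36, 95% CI 0.18–0.72):

```python
import math

# 95% CI bounds for the pooled mortality odds ratio from the abstract.
lo, hi = 0.18, 0.72
geo_mean = math.sqrt(lo * hi)
print(round(geo_mean, 2))  # 0.36 — matches the reported point estimate
```

When the geometric mean of the bounds lands far from the reported OR, it often signals a transcription error in the abstract.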


Preventing post-traumatic stress after an ICU stay


Kredentser et al., CCM 2018


DOI: 10.1097/CCM.0000000000003367


Objectives: Critical illness can have a significant psychological impact on patients and their families. To inform the design of a larger trial, we assessed feasibility of ICU diaries and psychoeducation to prevent posttraumatic stress disorder, depression, and anxiety following ICU stays.
Design: Four-arm pilot randomized controlled trial.
Setting: A 10-bed tertiary ICU in Winnipeg, MB, Canada.
Patients: Critically ill patients greater than 17 years old with predicted ICU stays greater than 72 hours and mechanical ventilation duration greater than 24 hours.
Interventions: Patients were randomized to usual care, ICU diary, psychoeducation, or both ICU diary and psychoeducation.
Measurements and Main Results: Our primary objective was to determine feasibility measured by enrollment/mo. Secondary outcomes included acceptability of the ICU diary intervention and psychological distress, including patients’ memories 1 week post ICU using the ICU Memory Tool, posttraumatic stress disorder (Impact of Events Scale-Revised), depression, and anxiety symptoms (Hospital Anxiety and Depression Scale) 30 and 90 days post ICU. Over 3.5 years, we enrolled 58 patients, an average of 1.9 participants/mo. Families and healthcare providers wrote a mean of 3.2 diary entries/d (SD, 2.9) and indicated positive attitudes and low perceived burden toward ICU diary participation. A majority of patients reported distressing memories of their ICU stay. Those who received the diary intervention had significantly lower median Hospital Anxiety and Depression Scale anxiety (3.0 [interquartile range, 2–6.25] vs 8.0 [interquartile range, 7–10]; p = 0.01) and depression (3.0 [interquartile range, 1.75–5.25] vs 5.0 [interquartile range, 4–9]; p = 0.04) symptom scores at 90 days than patients who did not receive a diary.
Conclusions: ICU diaries are a feasible intervention in a tertiary Canadian ICU context. Preliminary evidence supports the efficacy of ICU diaries to reduce psychological morbidity following discharge.



Value of repeated qSOFA measurements in patients with possible sepsis


Kievlan et al., CCM 2018


DOI: 10.1097/CCM.0000000000003360


Objectives: Among patients with suspected infection, a single measurement of the quick Sepsis-related Organ Failure Assessment has good predictive validity for sepsis, yet the increase in validity from repeated measurements is unknown. We sought to determine the incremental predictive validity for sepsis of repeated quick Sepsis-related Organ Failure Assessment measurements over 48 hours compared with the initial measurement.
Design: Retrospective cohort study.
Setting: Twelve hospitals in southwestern Pennsylvania in 2012.
Patients: All adult medical and surgical encounters in the emergency department, hospital ward, postanesthesia care unit, and ICU.
Interventions: None.
Measurements and Main Results: Among 1.3 million adult encounters, we identified those with a first episode of suspected infection. Using the maximum quick Sepsis-related Organ Failure Assessment score in each 6-hour epoch from onset of suspected infection until 48 hours later, we characterized repeated quick Sepsis-related Organ Failure Assessment with: 1) summary measures (e.g., mean over 48 hr), 2) crude trajectory groups, and 3) group-based trajectory modeling. We measured the predictive validity of repeated quick Sepsis-related Organ Failure Assessment using incremental changes in the area under the receiver operating characteristic curve for in-hospital mortality beyond that of baseline risk (age, sex, race/ethnicity, and comorbidity). Of 37,591 encounters with suspected infection, 1,769 (4.7%) died before discharge. Both the mean quick Sepsis-related Organ Failure Assessment at 48 hours (area under the receiver operating characteristic, 0.86 [95% CI, 0.85–0.86]) and crude trajectory groups (area under the receiver operating characteristic, 0.83 [95% CI, 0.83–0.83]) improved predictive validity compared with initial quick Sepsis-related Organ Failure Assessment (area under the receiver operating characteristic, 0.79 [95% CI, 0.78–0.80]) (p < 0.001 for both). Group-based trajectory modeling found five trajectories (quick Sepsis-related Organ Failure Assessment always low, increasing, decreasing, moderate, and always high) with greater predictive validity than the initial measurement (area under the receiver operating characteristic, 0.85 [95% CI, 0.84–0.85]; p < 0.001).
Conclusions: Repeated measurements of quick Sepsis-related Organ Failure Assessment improve predictive validity for sepsis using in-hospital mortality compared with a single measurement of quick Sepsis-related Organ Failure Assessment at the time a clinician suspects infection.
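The area under the receiver operating characteristic curve reported throughout this abstract has a simple probabilistic reading: the chance that a randomly chosen non-survivor receives a higher score than a randomly chosen survivor, with ties counted as one half. A minimal sketch (the toy scores are ours, not trial data):

```python
def auroc(scores_pos, scores_neg):
    """P(random positive outranks random negative), counting ties as 0.5."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Toy example: qSOFA-like scores for non-survivors vs survivors.
print(auroc([2, 3], [1, 2]))  # 0.875
```

This pairwise-comparison form is exactly the Mann–Whitney U statistic rescaled to [0, 1], which is why an AUROC of 0.79 vs 0.86 is a meaningful difference in discrimination.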


Review of the management of polytrauma patients


Tisherman et al., CCM 2018


DOI: 10.1097/CCM.0000000000003407




Cost-effectiveness study of PCT in the ICU


Michelle M. A. Kip et al., CC, 2018




Procalcitonin (PCT) testing can help in safely reducing antibiotic treatment duration in intensive care patients with sepsis. However, the cost-effectiveness of such PCT guidance is not yet known.


A trial-based analysis was performed to estimate the cost-effectiveness of PCT guidance compared with standard of care (without PCT guidance). Patient-level data were used from the SAPS trial in which 1546 patients were randomised. This trial was performed in the Netherlands, which is a country with, on average, low antibiotic use and a short duration of hospital stay. As quality of life among sepsis survivors was not measured during the SAPS, this was derived from a Dutch follow-up study. Outcome measures were (1) incremental direct hospital cost and (2) incremental cost per quality-adjusted life year (QALY) gained from a healthcare perspective over a one-year time horizon. Uncertainty in outcomes was assessed with bootstrapping.


Mean in-hospital costs were €46,081/patient in the PCT group compared with €46,146/patient with standard of care (i.e. − €65 (95% CI − €6314 to €6107); − 0.1%). The duration of the first course of antibiotic treatment was lower in the PCT group with 6.9 vs. 8.2 days (i.e. − 1.2 days (95% CI − 1.9 to − 0.4), − 14.8%). This was accompanied by lower in-hospital mortality of 21.8% vs. 29.8% (absolute decrease 7.9% (95% CI − 13.9% to − 1.8%), relative decrease 26.6%), resulting in an increase in mean QALYs/patient from 0.47 to 0.52 (i.e. + 0.05 (95% CI 0.00 to 0.10); + 10.1%). However, owing to high costs among sepsis survivors, healthcare costs over a one-year time horizon were €73,665/patient in the PCT group compared with €70,961/patient with standard of care (i.e. + €2704 (95% CI − €4495 to €10,005), + 3.8%), resulting in an incremental cost-effectiveness ratio of €57,402/QALY gained. Within this time frame, the probability of PCT guidance being cost-effective was 64% at a willingness-to-pay threshold of €80,000/QALY.


Although the impact of PCT guidance on total healthcare-related costs during the initial hospitalisation episode is likely negligible, the lower in-hospital mortality may lead to a non-significant increase in costs over a one-year time horizon. However, since uncertainty remains, it is recommended to investigate the long-term cost-effectiveness of PCT guidance, from a societal perspective, in different countries and settings.
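The incremental cost-effectiveness ratio reported above is just the incremental cost divided by the incremental QALYs. A sketch of the arithmetic (the function name is ours; the inputs are the rounded per-patient figures quoted in the abstract, which is why they do not reproduce the published ICER exactly):

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return delta_cost / delta_qaly

# Rounded figures from the abstract: +EUR 2704/patient for +0.05 QALY/patient.
print(round(icer(2704, 0.05)))   # EUR/QALY from the rounded inputs
# The reported EUR 57,402/QALY implies an unrounded QALY gain of about:
print(round(2704 / 57402, 3))    # ~0.047 QALY/patient
```

The rounded inputs give roughly EUR 54,000/QALY, close to but not identical with the published EUR 57,402/QALY, which was computed from unrounded patient-level data.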


Iron deficiency in the ICU assessed by hepcidin measurement



Sigismond Lasocki et al., CC, 2018

Critical Care 2018, 22:314




Iron deficiency is difficult to diagnose in critically ill patients, but may be frequent and may impair recovery. Measurement of hepcidin could help in the diagnosis of iron deficiency. We aim to assess if iron deficiency diagnosed using hepcidin is associated with poorer outcome one year after an intensive care unit stay.


We used the prospective FROG-ICU, multicentre (n = 28 ICUs), observational cohort study of critically ill survivors followed up one year after intensive care unit discharge. Iron deficiency was defined as hepcidin < 20 ng/l, ferritin < 100 ng/l or soluble transferrin receptor (sTfR)/log(ferritin) > 0.8, measured in blood drawn at intensive care unit discharge. Main outcomes were one-year all-cause mortality and poor quality of life (defined as a Short Form 36 (SF-36) score below the median).


Among the 2087 patients in the FROG-ICU cohort, 1570 were discharged alive and 1161 had a blood sample available at intensive care unit discharge and were included in the analysis. Using hepcidin, 429 (37%) patients had iron deficiency, compared to 72 (6%) using ferritin alone and 151 (13%) using the sTfR/log(ferritin) ratio. Iron deficiency diagnosed according to low hepcidin was an independent predictor of one-year mortality (OR 1.51 (1.10–2.08)) as was high sTfR/log(ferritin) ratio (OR 1.95 (1.27–3.00)), but low ferritin was not. Severe iron deficiency, defined as hepcidin < 10 ng/l, was also an independent predictor of poor one-year physical recovery (OR 1.58 (1.01–2.49)).


Iron deficiency, diagnosed using hepcidin, is very frequent at intensive care unit discharge and is associated with increased one-year mortality and poorer physical recovery. Whether iron treatment may improve these outcomes remains to be investigated.


Review of aerosolised antibiotics

Feng Xu et al., Critical Care 2018, 22:301


Review of the predictive performance of qSOFA and SIRS


S. Maitra et al., CMI, 2018



To identify sensitivity, specificity and predictive accuracy of quick sequential organ failure assessment (qSOFA) score and systemic inflammatory response syndrome (SIRS) criteria to predict in-hospital mortality in hospitalized patients with suspected infection.


This meta-analysis followed the Meta-analysis of Observational Studies in Epidemiology (MOOSE) group consensus statement for conducting and reporting the results of systematic review. PubMed and EMBASE were searched for observational studies which reported the predictive utility of the qSOFA score for predicting mortality in patients with suspected or proven infection, with the following search words: ‘qSOFA’, ‘q-SOFA’, ‘quick-SOFA’, ‘Quick Sequential Organ Failure Assessment’, ‘quick SOFA’. Sensitivity, specificity and area under receiver operating characteristic (ROC) curves with 95% confidence interval (CI) of qSOFA and SIRS criteria for predicting in-hospital mortality were collected for each study and a 2 × 2 table was created for each study.


Data from 406,802 patients from 45 observational studies were included in this meta-analysis. Pooled sensitivity (95% CI) and specificity (95% CI) of qSOFA ≥2 for predicting mortality in patients who were not in an intensive care unit (ICU) was 0.48 (0.41–0.55) and 0.83 (0.78–0.87), respectively. Pooled sensitivity (95% CI) of qSOFA ≥2 for predicting mortality in patients (both ICU and non-ICU settings) with suspected infection was 0.56 (0.47–0.65) and pooled specificity (95% CI) was 0.78 (0.71–0.83).


qSOFA has been found to be a poorly sensitive predictive marker for in-hospital mortality in hospitalized patients with suspected infection. It is reasonable to recommend developing another scoring system with higher sensitivity to identify high-risk patients with infection.
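Pooled sensitivity and specificity such as those above are derived from 2 × 2 tables of score vs. outcome in each study. A sketch with hypothetical counts, chosen so the output matches the pooled non-ICU estimates quoted above:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 table of test vs outcome."""
    sensitivity = tp / (tp + fn)  # deaths flagged by qSOFA >= 2 / all deaths
    specificity = tn / (tn + fp)  # survivors with qSOFA < 2 / all survivors
    return sensitivity, specificity

# Hypothetical counts (per 100 deaths and 100 survivors), not study data.
se, sp = sens_spec(tp=48, fn=52, tn=83, fp=17)
print(se, sp)  # 0.48 0.83 — the pooled non-ICU estimates above
```

The low sensitivity is the crux of the review's conclusion: with these figures, roughly half of the patients who go on to die are missed by a single qSOFA ≥2 threshold.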



What magnesium dose to slow rapid AF?



Bouida W et al., Acad Emerg Med. 2018 Jul 19.


doi: 10.1111/acem.13522

OBJECTIVES: We aim to determine the benefit of two different doses of magnesium sulfate (MgSO4) compared to placebo in rate control of rapid atrial fibrillation (AF) managed in the emergency department (ED).

METHODS: We undertook a randomized, controlled, double-blind clinical trial in three university hospital EDs between August 2009 and December 2014. Patients > 18 years with rapid AF (>120 beats/min) were enrolled and randomized to 9 g of intravenous MgSO4 (high-dose group, n = 153), 4.5 g of intravenous MgSO4 (low-dose group, n = 148), or saline infusion (placebo group, n = 149), given in addition to atrioventricular (AV) nodal blocking agents. The primary outcome was the reduction of baseline ventricular rate (VR) to 90 beats/min or less or reduction of VR by 20% or greater from baseline (therapeutic response). Secondary outcomes included resolution time (defined as the elapsed time from start of treatment to therapeutic response), sinus rhythm conversion rate, and adverse events within the first 24 hours.

RESULTS: At 4 hours, the therapeutic response rate was higher in the low- and high-dose MgSO4 groups compared to the placebo group; the absolute differences were, respectively, +20.5% (risk ratio [RR] = 2.31, 95% confidence interval [CI] = 1.45-3.69) and +15.8% (RR = 1.89, 95% CI = 1.20-2.99). At 24 hours, compared to the placebo group, the therapeutic response difference was +14.1% (RR = 9.74, 95% CI = 2.87-17.05) with low-dose MgSO4 and +10.3% (RR = 3.22, 95% CI = 1.45-7.17) with high-dose MgSO4. The lowest resolution time was observed in the low-dose MgSO4 group (5.2 ± 2 hours) compared to 6.1 ± 1.9 hours in the high-dose MgSO4 group and 8.4 ± 2.5 hours in the placebo group. The rhythm control rate at 24 hours was significantly higher in the low-dose MgSO4 group (22.9%) compared to the high-dose MgSO4 group (13.0%, p = 0.03) and the placebo group (10.7%). Adverse effects were minor and significantly more frequent with high-dose MgSO4.

CONCLUSIONS: Intravenous MgSO4 appears to have a synergistic effect when combined with other AV nodal blockers, resulting in improved rate control. Similar efficacy was observed with 4.5 g and 9 g of MgSO4, but the 9 g dose was associated with more side effects.
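The trial's primary outcome above is a simple composite predicate on ventricular rate; a sketch of the rule as code (the function name and example rates are ours):

```python
def therapeutic_response(baseline_vr, current_vr):
    """Primary outcome above: VR <= 90 beats/min, or >= 20% below baseline."""
    return current_vr <= 90 or current_vr <= 0.8 * baseline_vr

print(therapeutic_response(140, 108))  # True: 108 <= 0.8 * 140 = 112
print(therapeutic_response(130, 110))  # False: still > 90 and only ~15% lower
```

Note the disjunction: a patient starting at 140 beats/min "responds" at 108 even though that rate is still above 90, because the relative drop exceeds 20%.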


