Effect of Apneic Oxygenation on Tracheal Oxygen Levels, Tracheal Pressure, and Carbon Dioxide Accumulation: A Randomized, Controlled Trial of Buccal Oxygen Administration
BACKGROUND: Apneic oxygenation via the oral route using a buccal device extends the safe apnea time in most but not all obese patients. Apneic oxygenation techniques are most effective when tracheal oxygen concentrations are maintained >90%. It remains unclear whether buccal oxygen administration consistently achieves this goal and whether significant risks of hypercarbia or barotrauma exist. METHODS: We conducted a randomized trial of buccal or sham oxygenation in healthy, nonobese patients (n = 20), using prolonged laryngoscopy to maintain apnea with a patent airway until arterial oxygen saturation (Spo2) dropped <95% or 750 seconds elapsed. Tracheal oxygen concentration, tracheal pressure, and transcutaneous carbon dioxide (CO2) were measured throughout. The primary outcome was maintenance of a tracheal oxygen concentration >90% during apnea. RESULTS: Buccal patients were more likely to achieve the primary outcome (P <.0001), had higher tracheal oxygen concentrations throughout apnea (mean difference, 65.9%; 95% confidence interval [CI], 62.6%-69.3%; P <.0001), and had a prolonged median (interquartile range) apnea time with Spo2 >94%: 750 seconds (750-750 seconds) vs 447 seconds (405-525 seconds); P <.001. One patient desaturated to Spo2 <95% despite 100% tracheal oxygen. Mean tracheal pressures were low in the buccal (0.21 cm·H2O; SD = 0.39) and sham (0.56 cm·H2O; SD = 1.25) arms; mean difference, -0.35 cm·H2O; 95% CI, -1.22 to 0.53; P =.41. CO2 accumulation during early apnea before any study end points were reached was linear and marginally faster in the buccal arm (3.16 vs 2.82 mm Hg/min; mean difference, 0.34; 95% CI, 0.30-0.38; P <.001). Prolonged apnea in the buccal arm revealed nonlinear CO2 accumulation that declined over time and averaged 2.22 mm Hg/min (95% CI, 2.21-2.23). CONCLUSIONS: Buccal oxygen administration reliably maintains high tracheal oxygen concentrations, but early arterial desaturation can still occur through mechanisms other than device failure. Whereas the risk of hypercarbia is similar to that observed with other approaches, the risk of barotrauma is negligible. Continuous measurement of advanced physiological parameters is feasible in an apneic oxygenation trial and can assist with device evaluation. © 2019 International Anesthesia Research Society.
Pediatric anesthesiology fellows' perception of quality of attending supervision and medical errors
Background: Appropriate supervision has been shown to reduce medical errors in anesthesiology residents and other trainees across various specialties. Nonetheless, supervision of pediatric anesthesiology fellows has yet to be evaluated. The main objective of this survey investigation was to evaluate supervision of pediatric anesthesiology fellows in the United States. We hypothesized that there was an inverse association between perceived quality of faculty supervision of pediatric anesthesiology fellow trainees and the frequency of medical errors reported. Methods: A survey of pediatric fellows from 53 pediatric anesthesiology fellowship programs in the United States was performed. The primary outcome was the frequency of self-reported errors by fellows, and the primary independent variable was supervision scores. Questions also assessed barriers for effective faculty supervision. Results: One hundred seventy-six pediatric anesthesiology fellows were invited to participate, and 104 (59%) responded to the survey. Nine of 103 (9%, 95% confidence interval [CI], 4%-16%) respondents reported performing procedures, on >1 occasion, for which they were not properly trained. Thirteen of 101 (13%, 95% CI, 7%-21%) reported making >1 mistake with negative consequence to patients, and 23 of 104 (22%, 95% CI, 15%-31%) reported >1 medication error in the last year. There were no differences in median (interquartile range) supervision scores between fellows who reported >1 medication error compared to those reporting ≤1 error (3.4 [3.0-3.7] vs 3.4 [3.1-3.7]; median difference, 0; 99% CI, -0.3 to 0.3; P = .96). Similarly, there were no differences in those who reported >1 mistake with negative patient consequences, 3.3 (3.0-3.7), compared with those who did not report mistakes with negative patient consequences (3.4 [3.3-3.7]; median difference, 0.1; 99% CI, -0.2 to 0.6; P = .35). Conclusions: We detected a high rate of self-reported medication errors in pediatric anesthesiology fellows in the United States. Interestingly, fellows' perception of quality of faculty supervision was not associated with the frequency of reported errors. The current results with a narrow CI suggest the need to evaluate other potential factors that can be associated with the high frequency of reported errors by pediatric fellows (eg, fatigue, burnout). The identification of factors that lead to medical errors by pediatric anesthesiology fellows should be a main research priority to improve both trainee education and best practices of pediatric anesthesia. © 2017 International Anesthesia Research Society.
Regional anesthesia for vascular access surgery
BACKGROUND: Approximately 25% of initial arteriovenous fistula (AVF) placements will fail as a result of thrombosis or failure to develop adequate vessel size and blood flow. Fistula maturation is impacted by patient characteristics and surgical technique, but both increased vein diameter and high fistula blood flow rates are the most important predictors of successful AVFs. Anesthetic techniques used in vascular access surgery (monitored anesthesia care, regional blocks, and general anesthesia) may affect these characteristics and fistula failure. METHODS: We performed a literature search using key words in the PubMed/MEDLINE database. Seven articles that related to the effects of anesthesia on AVF construction, including sympathetic block, vein dilation, blood flow, adverse outcomes, or patency rates, comprised the sources for this review. RESULTS: Significant vasodilation after regional block administration is seen in both the cephalic and basilic veins. These vasodilatory properties may assist with AVF site selection. In the intraoperative and postoperative periods, use of a regional block, compared with other anesthetic techniques, resulted in significantly increased fistula blood flow. The greater sympathetic block contributed to vessel dilation and reduced vasospasm. Use of regional techniques in AVF construction yielded shorter maturation times, lower failure rates, and higher patency rates. CONCLUSION: Use of regional blocks may improve the success of vascular access procedures by producing significant vasodilatation, greater fistula blood flow, sympathectomy-like effects, and decreased maturation time. However, a large-scale, prospective, clinical trial comparing the different anesthetic techniques is still needed to verify these findings. Copyright © 2009 International Anesthesia Research Society.
Effect of a Rapid Response Team on the Incidence of In-Hospital Mortality
BACKGROUND: Approximately half of the life-limiting events, such as cardiopulmonary arrests or cardiac arrhythmias occurring in hospitals, are considered preventable. These critical events are usually preceded by clinical deterioration. Rapid response teams (RRTs) were introduced to intervene early in the course of clinical deterioration and possibly prevent progression to an event. An RRT was introduced at the Cleveland Clinic in 2009 and transitioned to an anesthesiologist-led system in 2012. We evaluated the association between in-hospital mortality and: (1) the introduction of the RRT in 2009 (primary analysis), and (2) introduction of the anesthesiologist-led system in 2012 and other policy changes in 2014 (secondary analyses). METHODS: We conducted a single-center, retrospective analysis using the medical records of overnight hospitalizations from March 1, 2005, to December 31, 2018, at the Cleveland Clinic. We assessed the association between the introduction of the RRT in 2009 and in-hospital mortality using segmented regression in a generalized estimating equation model to account for within-subject correlation across repeated visits. Baseline potential confounders (demographic factors and surgery type) were controlled for using inverse probability of treatment weighting on the propensity score. We assessed whether in-hospital mortality changed at the start of the intervention and whether the temporal trend (slope) differed from before to after initiation. Analogous models were used for the secondary outcomes. RESULTS: Of 628,533 hospitalizations in our data set, 177,755 occurred before and 450,778 after introduction of our RRT program. Introduction of the RRT was associated with a slight initial increase in in-hospital mortality (odds ratio [95% confidence interval {CI}], 1.17 [1.09-1.25]; P <.001). However, while the pre-RRT slope in in-hospital mortality over time was flat (odds ratio [95% CI] per year, 1.01 [0.98-1.04]; P =.60), the post-RRT slope decreased over time, with an odds ratio per additional year of 0.961 (0.955-0.968). This represented a significant improvement (P <.001) from the pre-RRT slope. CONCLUSIONS: We found a gradual decrease in mortality over a 9-year period after introduction of an RRT program. Although mechanisms underlying this decrease are unclear, possibilities include optimization of RRT implementation, anesthesiology department leadership of the RRT program, and overall improvements in health care delivery over the study period. Our findings suggest that improvements in outcome after RRT introduction may take years to manifest. Further work is needed to better understand the effects of RRT implementation on in-hospital mortality. © 2022 Lippincott Williams and Wilkins. All rights reserved.
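Where the interrupted time-series model above is of interest, a minimal sketch of segmented regression with generalized estimating equations and inverse-probability-of-treatment weighting is shown below, using simulated data and hypothetical column names rather than the Cleveland Clinic records:

```python
# Sketch of a segmented (interrupted time-series) logistic regression fit with GEE,
# weighted by inverse probability of treatment weights (IPTW).
# All data and column names are simulated/hypothetical, not the study's records.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "death": rng.binomial(1, 0.05, 1000),        # in-hospital mortality (binary)
    "years": rng.uniform(0, 14, 1000),           # time since start of study period
    "post_rrt": rng.binomial(1, 0.7, 1000),      # indicator for after RRT introduction
    "patient_id": rng.integers(0, 400, 1000),    # cluster id for repeated visits
    "iptw": rng.uniform(0.5, 2.0, 1000),         # propensity-score-based weights
})
df["years_after"] = df["years"] * df["post_rrt"]  # slope change after RRT introduction

exog = sm.add_constant(df[["years", "post_rrt", "years_after"]])
model = sm.GEE(df["death"], exog, groups=df["patient_id"],
               family=sm.families.Binomial(), weights=df["iptw"])
result = model.fit()
print(np.exp(result.params))  # exponentiated coefficients: intercept, pre-slope/yr,
                              # level shift at introduction, and slope change/yr
```

The exponentiated coefficients correspond to the pre-intervention slope, the level change at RRT introduction, and the post-intervention slope change, mirroring the quantities reported above.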
Research, education, and nonclinical service productivity of new junior anesthesia faculty during a 2-year faculty development program
BACKGROUND: As a specialty, anesthesiology has relatively low research productivity. Prior studies indicate that junior faculty development programs favorably affect academic performance. We therefore initiated a junior faculty development program and hypothesized that most (>50%) new junior faculty would take <50 nonclinical days to achieve a primary program goal (e.g., investigation or publication), and <5 nonclinical days to achieve a secondary program goal (e.g., teaching or nonclinical service). METHODS: Twenty new junior faculty participated in the 2-year program which had a goal-oriented structure and was supported by nonclinical time, formally assigned mentors, and a didactic curriculum. Goal productivity equaled the number of program goals accomplished divided by the amount of nonclinical time received. Primary goal productivity was expressed as primary goals accomplished per 50 nonclinical days. Secondary goal productivity was expressed as secondary goals accomplished per 5 nonclinical days. RESULTS: Median primary goal productivity was 0.45 primary goals per 50 nonclinical days (25th-75th interquartile range = 0.00-0.73). Contrary to our hypothesis, most new junior faculty needed >50 nonclinical days to achieve a primary goal (17/20, P = 0.0026). Median secondary goal productivity was 0.57 secondary goals per 5 nonclinical days (25th-75th interquartile range = 0.38-0.77). Contrary to our hypothesis, most new junior faculty needed >5 nonclinical days to accomplish a secondary goal (18/20, P = 0.0004). It was not clear that the faculty development program increased program goal productivity. CONCLUSIONS: Even with structured developmental support, most new junior anesthesia faculty needed >50 nonclinical days to achieve a primary (traditional academic) goal and >5 nonclinical days to achieve a secondary goal. Currently, most new anesthesia faculty are not productive in traditional academic activities (research). They are more productive in activities related to clinical care, education, and patient care systems management. Copyright © 2013 International Anesthesia Research Society.
National pediatric anesthesia safety quality improvement program in the United States
BACKGROUND: As pediatric anesthesia has become safer over the years, it is difficult to quantify these safety advances at any 1 institution. Safety analytics (SA) and quality improvement (QI) are used to study and achieve high levels of safety in nonhealth care industries. We describe the development of a multiinstitutional program in the United States, known as Wake-Up Safe (WUS), to determine the rate of serious adverse events (SAE) in pediatric anesthesia and to apply SA and QI in the pediatric anesthesia departments to decrease the SAE rate. METHODS: QI was used to design and implement WUS in 2008. The key drivers in the design were an organizational structure; an information system for the SAE; SA to characterize the SAE; QI to embed high-reliability care; communications to disseminate the learnings; and engaged leadership in each department. Interventions for the key drivers included Participation Agreements, Patient Safety Organization designation, IRB approval, Data Management Co., membership fee, SAE standard templates, SA and QI workshops, and department leadership meetings. RESULTS: WUS has 19 institutions, 39 member anesthesiologists, 734 SAE, and 736,365 anesthetics as of March 2013. The initial members joined at year 1, and initial SAE were recorded by year 2. The SAE rate is 1.4 per 1000 anesthetics. Of SAE, respiratory was most common, followed by cardiac arrest, care escalation, and cardiovascular, collectively 76% of SAE. In care escalation, medication errors and equipment dysfunction were 89%. Of member anesthesiologists, 70% were trained in SA and QI by March 2013; virtually none had SA and QI expertise before joining WUS. CONCLUSION: WUS documented the incidence and types of SAE nationally in pediatric anesthesiology. Education and application of QI and SA in anesthesia departments are key strategies to improve perioperative safety by WUS. Copyright © 2014 International Anesthesia Research Society.
Understanding the Economic Impact of an Essential Service: Applying Time-Driven Activity-Based Costing to the Hospital Airway Response Team
BACKGROUND: As the United States moves toward value-based care metrics, it will become essential for anesthesia groups nationwide to understand the costs of their services. Time-driven activity-based costing (TDABC) estimates the amount of time it takes to perform a clinical activity by dividing complex tasks into process steps and mapping each step, and it has historically been used to estimate the costs of various health care services. TDABC is a tool that can be adapted for variable staffing models and the volume of service provided. Anesthesia departments often provide staffing for airway response teams (ART). The economic implications of staffing an ART have not been well described. We present a TDABC model for ART activation in a tertiary-care center to estimate the cost incurred by an anesthesiology department to staff an ART. METHODS: Pages received by the Brigham and Women's Hospital ART over a 24-month time period (January 2019 to December 2020) were analyzed and categorized. The local administrative database was queried for the Current Procedural Terminology (CPT) code used to bill for emergency airway placements. Sessions were held by multiple members of the ART to create process maps for the different types of ART activations. We estimated the staffing costs using the estimated time it took for each type of ART activation as well as the data collected for local ART activations. RESULTS: From the paging records, we analyzed 3368 activations of the ART. During the study period, 1044 airways were billed for with the emergency airway CPT code. The average revenue collected per airway was $198.45 (95% CI, $190-$207). Process maps were created for STAT/emergency airway team activations and for non-STAT airway team activations, and a third subprocess map was created for performing endotracheal intubation. Using the TDABC, the total staffing costs are estimated to be $218,601 for the 2-year study period. The ART generated $207,181 in revenue during the study period. CONCLUSIONS: Our analysis of ART-activation pages suggests that while the revenue generated may cover the cost of staffing the team during ART activations, it does not cover consumable equipment costs. Additionally, the current fee-for-service model relies on the team being able to perform other clinical duties in addition to covering the airway pager, a dependency that would be impossible to capture using traditional top-down costing methods. By using TDABC, anesthesia groups can demonstrate how certain services, such as ART, are not fully covered by current reimbursement models and how to negotiate for subsidy agreements. As the transition from traditional fee-for-service payments to value-based care models continues in the United States, improving the understanding and communication of medical care costs will be essential. In the United States, it is common for anesthesia groups to receive direct revenue from hospitals to preserve financial viability, and therefore, knowledge of true cost is essential regardless of payer model.1 With traditional payment models, what is billable and nonbillable may not reflect either the need for or the cost of providing the service. As anesthesia departments navigate the transition of care from volume to value, actual costs will be essential to understand for negotiations with hospitals for support when services are nonbillable, when revenue from payers does not cover anesthesia costs, and when calculating the appropriate share for anesthesia departments when bundled payments are distributed.
© 2022 Lippincott Williams and Wilkins. All rights reserved.
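The TDABC arithmetic underlying the staffing-cost estimate above reduces to summing, over each mapped process step, the step time multiplied by the per-minute capacity cost of the responders involved; the sketch below uses invented step times and rates, not the Brigham and Women's process maps:

```python
# Toy TDABC sketch: cost of one ART activation = sum over process steps of
# (minutes spent) x (per-minute capacity cost rate of each responder involved).
# All rates, steps, and times below are hypothetical placeholders.

per_minute_rate = {          # capacity cost rate = compensation / practical capacity (min)
    "attending": 4.00,
    "resident": 1.00,
    "respiratory_therapist": 0.80,
}

# Hypothetical process map for a non-STAT activation: (step, minutes, responders)
process_map = [
    ("travel to bedside", 5, ["attending", "resident"]),
    ("airway assessment", 10, ["attending", "resident", "respiratory_therapist"]),
    ("endotracheal intubation", 15, ["attending", "resident", "respiratory_therapist"]),
    ("documentation and handoff", 5, ["resident"]),
]

def activation_cost(steps, rates):
    """Sum staffing cost across all process steps of one activation."""
    return sum(minutes * sum(rates[r] for r in roles) for _, minutes, roles in steps)

cost = activation_cost(process_map, per_minute_rate)
print(f"Estimated staffing cost per activation: ${cost:.2f}")
# A period total would then be: cost per activation type x number of activations of that type.
```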
Trainability of Cricoid Pressure Force Application: A Simulation-Based Study
BACKGROUND: Aspiration of gastric contents is a leading cause of airway management–related mortality during anesthesia practice. Cricoid pressure (CP) is widely used during rapid sequence induction to prevent aspiration. National guidelines for CP suggest a target force of 10 N before and 30 N after loss of consciousness. However, few studies have rigorously assessed whether clinicians can be trained to consistently achieve these levels of force. We hypothesized that clinicians can be trained effectively to deliver 10–30 N during application of CP. METHODS: Clinicians (attending anesthesiologist, anesthesiology residents, certified registered nurse anesthetists, or operating room nurses) applied CP on a Vernier force plate simulator with measurements taken at 4 time points over 60 seconds, 2 measurements before and 2 measurements after loss of consciousness. A successful cycle required all 4 time points to be within the target range (10 ± 5 and 30 ± 5 N, respectively). After baseline assessment (n = 100 clinicians), a subset of 40 participants volunteered for education on recommended force targets, underwent self-regulated practice, and then performed 30 1-minute cycles of high-frequency simulation analyzed by cumulative sum analysis to assess their change in performance. RESULTS: At baseline, 5 cycles (1.3% [confidence interval {CI}, 0.3%–2.50%]) out of 400 were successful. Performance improved after education and self-regulated practice (16% successful cycles [CI, 7.8%–25%]), and performance during the last 4 of 30 cycles was 45% (CI, 33%–58%). The odds of success increased over time (odds ratio, 1.1; P < .001). By cumulative sum analysis, however, no subject crossed the h0 line, indicating that no one achieved proficiency of the predefined target forces. CONCLUSIONS: At baseline, performance was poor at achieving target forces specified by national guidelines. Simulation-based training improved the success rate, but no participant achieved the predefined threshold for proficiency. Copyright © 2018 International Anesthesia Research Society
Fluid challenge during anesthesia: A systematic review and meta-analysis
BACKGROUND: Assessing the volemic status of patients undergoing surgery is part of the routine management for the anesthesiologist. This assessment is commonly performed by means of dynamic indexes based on the cardiopulmonary interaction during mechanical ventilation (if available) or by administering a fluid challenge (FC). The FC is used during surgery to optimize predefined hemodynamic targets, the so-called Goal-Directed Therapy (GDT), or to correct hemodynamic instability (non-GDT). METHODS: In this systematic review, we considered the FC components in studies adopting either GDT or non-GDT, to assess whether differences exist between the 2 approaches. In addition, we performed a meta-analysis to ascertain the effectiveness of the dynamic indexes pulse pressure variation (PPV) and stroke volume (SV) variation (SVV) in predicting fluid responsiveness. RESULTS: Thirty-five non-GDT and 33 GDT studies met inclusion criteria, including 5017 patients. In the vast majority of non-GDT and GDT studies, the FC consisted of the administration of colloids (85.7% and 90.9%, respectively). In 29 non-GDT studies, the colloid infused was 6% hydroxyethyl starch (6% HES; 96.6% of this subgroup). In 20 GDT studies, the colloid infused was 6% HES (66.7% of this subgroup), while in 5 studies it was a gelatin (16.7% of this subgroup), in 3 studies an unspecified colloid (10.0% of this subgroup), in 1 study albumin (3.3%), and in another study both 6% HES and gelatin (3.3%). In non-GDT studies, the median volume infused was 500 mL; the time of infusion and the hemodynamic target used to assess fluid responsiveness lacked standardization. In GDT studies, the FC usually consisted of the administration of 250 mL of colloids (48.8%) in 10 minutes (45.4%) targeting an SV increase >10% (57.5%). A safety limit was adopted in only 60.6% of GDT studies. PPV pooled area under the curve (95% confidence interval [CI]) was 0.86 (0.80-0.92). The mean (standard deviation) PPV threshold predicting fluid responsiveness was 10.5% (3.2) (range, 8%-15%), while the pooled (95% CI) sensitivity and specificity were 0.80 (0.74-0.85) and 0.83 (0.73-0.91), respectively. SVV pooled area under the curve (95% CI) was 0.87 (0.81-0.93). The mean (standard deviation) SVV threshold predicting fluid responsiveness was 11.3% (3.1) (range, 7.5%-15.5%), while the pooled (95% CI) sensitivity and specificity were 0.82 (0.75-0.89) and 0.77 (0.71-0.82), respectively. CONCLUSIONS: The key components of the FC, including type of fluid (colloids, often 6% HES), volume (500 mL in non-GDT studies and 250 mL in GDT studies), and time of infusion (10 minutes), are quite standardized in the operating room. However, the pooled sensitivity and specificity of both PPV and SVV are limited. Copyright © 2018 International Anesthesia Research Society.
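For reference, the dynamic indexes pooled above are conventionally calculated as the maximum-to-minimum swing over a respiratory cycle divided by their mean; a minimal sketch using the standard formula and made-up beat values:

```python
# Standard formulas for pulse pressure variation (PPV) and stroke volume
# variation (SVV) over one mechanical-ventilation respiratory cycle.
def variation_percent(max_value: float, min_value: float) -> float:
    """100 * (max - min) / mean(max, min)."""
    return 100.0 * (max_value - min_value) / ((max_value + min_value) / 2.0)

# Hypothetical beat-to-beat extremes within one respiratory cycle:
ppv = variation_percent(max_value=48.0, min_value=41.0)   # pulse pressures, mm Hg
svv = variation_percent(max_value=72.0, min_value=63.0)   # stroke volumes, mL

# With pooled thresholds of roughly 10.5% (PPV) and 11.3% (SVV) from the
# meta-analysis above, values exceeding the threshold predict fluid responsiveness.
print(f"PPV = {ppv:.1f}%, SVV = {svv:.1f}%")
```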
Reliability and validity of the anesthesiologist supervision instrument when certified registered nurse anesthetists provide scores
BACKGROUND: At many facilities in the United States, supervision of Certified Registered Nurse Anesthetists (CRNAs) is a major daily responsibility of anesthesiologists. We use the term "supervision" to include clinical oversight functions directed toward assuring the quality of clinical care whenever the anesthesiologist is not the sole anesthesia care provider. In our department, the supervision provided by each anesthesiologist working in operating rooms is evaluated each day by the CRNA(s) and anesthesiology resident(s) with whom they worked the previous day. The evaluations utilize the 9 questions developed by de Oliveira Filho for residents to assess anesthesiologist supervision. Each question is answered on a 4-point Likert scale (1 = never, 2 = rarely, 3 = frequently, and 4 = always). We evaluated the reliability and validity of the instrument when used in daily practice by CRNAs. METHODS: The data set included all 7273 daily supervision scores and 1088 comments of 77 anesthesiologists provided by 49 CRNAs, as well as the 6246 scores and 681 comments provided by 62 residents, for dates of service between July 1, 2013, and June 30, 2014. Reliability of the instrument was assessed using its internal consistency. Content analysis was used to associate supervision scores (i.e., mean of the 9 answers) and presence of the verbs "see" or "saw" combined with negation in comments (e.g., "I did not see the anesthesiologist during the case(s) together"). Results are reported as the mean ± SE from among the 6 two-month periods. RESULTS: Supervision scores <2 were provided for 7.2% ± 0.4% of assessments and scores <3 were provided for 36.6% ± 1.1% of assessments, by 18.2 ± 0.9 and 34.0 ± 0.6 CRNAs, respectively (i.e., low scores were not attributable to just a few CRNAs or anesthesiologists). These frequencies were greater than for trainees (anesthesiology residents) (both P < 0.0001). No single question among the 9 questions in the supervision instrument explained CRNA supervision scores <2 (or <3) because of substantial (expected) interquestion correlation. Cronbach's alpha equaled 0.895 ± 0.003 among the 6 two-month periods. Among the CRNA evaluations that included a written comment, the Cronbach's alpha was 0.907 ± 0.003. Thus, like for anesthesiology residents, when used by CRNAs, the questions measured a one-dimensional attribute. The presence of a comment containing the action verb "see" or "saw," with the focus theme ("I did not see"), increased the odds of a CRNA providing a supervision score <2 (odds ratio = 74.2, P = 0.0003) and supervision score <3 (odds ratio = 48.2, P < 0.0001). Limiting consideration to scores with comments, there too was an association between these words and a score <2 (odds ratio = 19.4, P = 0.0003) and a score <3 (odds ratio = 31.5, P < 0.0001). In Iowa, substantial anesthesiologist presence is not required for CRNA billing. More comments containing "see" or "saw" were made by CRNAs rather than residents (n = 75 [97.4%] versus n = 2 [2.6%], respectively, P < 0.0001), indicating face validity of the analysis. If some of the 9 questions were not perceived by the CRNAs as relevant to their interprofessional interactions, Cronbach's alpha would be low, not the 0.907 ± 0.003, above. Similarly, one or more of the individual questions would also not routinely be scored at its upper boundary of 4.0 ("always"). 
This was not so: the score was 4.0 for 24.9% ± 0.3% of the CRNA evaluations, and that score of 4.0 was more common than even the next most common combination of scores (P < 0.0001). CONCLUSIONS: The de Oliveira Filho supervision instrument was designed for use by residents. Our results show that the instrument also is reliable and valid when used by CRNAs. This is important given our previous finding that the CRNA:MD ratio had no correlation with the level of supervision provided. © 2014 International Anesthesia Research Society.
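The internal-consistency statistic reported above can be computed directly from the item-score matrix; a minimal sketch of Cronbach's alpha with an invented toy matrix of 9-item Likert scores:

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
# The score matrix below is a made-up toy example, not the study's evaluation data.
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Internal consistency for a (respondents x items) matrix of Likert scores."""
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy example: 5 evaluations of the 9-question instrument on a 1-4 Likert scale.
scores = np.array([
    [4, 4, 4, 4, 4, 4, 4, 4, 4],
    [3, 3, 4, 3, 3, 3, 4, 3, 3],
    [2, 2, 3, 2, 2, 3, 2, 2, 2],
    [4, 3, 4, 4, 4, 3, 4, 4, 4],
    [3, 4, 3, 3, 3, 4, 3, 3, 3],
])
print(f"alpha = {cronbach_alpha(scores):.3f}")
```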
Lack of sensitivity of staffing for 8-hour sessions to standard deviation in daily actual hours of operating room time used for surgeons with long queues
BACKGROUND: At multiple facilities including some in the United Kingdom's National Health Service, the following are features of many surgical-anesthetic teams: i) there is sufficient workload for each operating room (OR) list to almost always be fully scheduled; ii) the workdays are organized such that a single surgeon is assigned to each block of time (usually 8 h); iii) one team is assigned per block; and iv) hardly ever would a team "split" to do cases in more than one OR simultaneously. METHODS: We used Monte-Carlo simulation using normal and Weibull distributions to estimate the times to complete lists of cases scheduled into such 8 h sessions. For each combination of mean and standard deviation, inefficiencies of use of OR time were determined for 10 h versus 8 h of staffing. RESULTS: When the mean actual hours of OR time used averages ≤8 h 25 min, 8 h of staffing has higher OR efficiency than 10 h for all combinations of standard deviation and relative cost of over-run to under-run. When the mean is ≥8 h 50 min, 10 h staffing has higher OR efficiency. For 8 h 25 min < mean < 8 h 50 min, the economic break-even point depends on conditions. For example, break-even is: (a) 8 h 27 min for Weibull, standard deviation of 60 min and relative cost of over-run to under-run of 2.0 versus (b) 8 h 48 min for normal, standard deviation of 0 min and relative cost ratio of 1.50. Although the simplest decision rule would be to staff for 8 h if the mean workload is ≤8 h 40 min and to staff for 10 h otherwise, performance was poor. For example, for the Weibull distribution with mean 8 h 40 min, standard deviation 60 min, and relative cost ratio of 2.00, the inefficiency of use of OR time would be 34% larger if staffing were planned for 8 h instead of 10 h. CONCLUSIONS: For surgical teams with 8 h sessions, use the following decision rule for anesthesiology and OR nurse staffing. If actual hours of OR time used averages ≤8 h 25 min, plan 8 h staffing. If the average is ≥8 h 50 min, plan 10 h staffing. For averages in between, perform the full analysis of McIntosh et al. (Anesth Analg 2006;103:1499-516). © 2009 International Anesthesia Research Society.
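A rough sketch of the simulation logic follows, scoring each staffing choice by the inefficiency of use of OR time (underutilized hours plus overutilized hours weighted by the relative cost ratio) in the framework of McIntosh et al.; only the normal-distribution case is shown, and the parameters are illustrative:

```python
# Monte-Carlo sketch of choosing 8 h vs 10 h staffing by expected inefficiency of
# use of OR time: underutilized hours + (relative cost ratio) x overutilized hours.
# Distribution parameters are illustrative, not the paper's exact scenarios.
import numpy as np

rng = np.random.default_rng(0)

def expected_inefficiency(staffed_h, mean_h, sd_h, cost_ratio=2.0, n=200_000):
    used = rng.normal(mean_h, sd_h, size=n)      # daily actual hours of OR time used
    used = np.clip(used, 0, None)
    under = np.clip(staffed_h - used, 0, None)   # staffed but unused hours
    over = np.clip(used - staffed_h, 0, None)    # hours run past staffed time
    return (under + cost_ratio * over).mean()

mean_workload = 8 + 40 / 60                      # e.g., mean workload of 8 h 40 min
for staffed in (8, 10):
    ineff = expected_inefficiency(staffed, mean_workload, sd_h=1.0)
    print(f"staffing {staffed:>2} h: expected inefficiency = {ineff:.2f} hour-equivalents")
```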
Changes in utilization of intraoperative laboratory testing associated with the introduction of point-of-care testing devices in an academic department
BACKGROUND: Availability of point-of-care testing (POCT) technology may lead to unnecessary testing and expense without improving outcomes. We tested the hypothesis that frequency of intraoperative blood testing (IBT) would increase in association with installation of POCT devices in our surgical suites. METHODS: We performed a retrospective analysis of 38,115 electronic anesthesia records for cases performed in the 1 yr before and 1 yr after POCT installation. For each case, the frequency of IBT was tabulated and the change in frequency of IBT between the study periods was calculated for individual anesthesiologists, for the department as a whole, and for clusters of anesthetizing locations. RESULTS: For the department as a whole, there was no significant change between the before and after study periods in the 13% proportion of cases in which IBT was obtained. For cases in which IBT was used, there was no significant increase in the number of IBTs per case. CONCLUSIONS: We found no significant increase in the overall utilization of IBT associated with POCT presence in noncardiothoracic operating rooms. © 2007 by International Anesthesia Research Society.
Head-elevated patient positioning decreases complications of emergent tracheal intubation in the ward and intensive care unit
BACKGROUND: Based on the data from elective surgical patients, positioning patients in a back-up head-elevated position for preoxygenation and tracheal intubation can improve patient safety. However, data specific to the emergent setting are lacking. We hypothesized that back-up head-elevated positioning would be associated with a decrease in complications related to tracheal intubation in the emergency room environment. METHODS: This retrospective study was approved by the University of Washington Human Subjects Division (Seattle, WA). Eligible patients included all adults undergoing emergent tracheal intubation outside of the operating room by the anesthesiology-based airway service at 2 university-affiliated teaching hospitals. All intubations were through direct laryngoscopy for an indication other than full cardiopulmonary arrest. Patient characteristics and details of the intubation procedure were derived from the medical record. The primary study endpoint was the occurrence of a composite of any intubation-related complication: difficult intubation, hypoxemia, esophageal intubation, or pulmonary aspiration. Multivariable logistic regression was used to estimate the odds of the primary endpoint in the supine versus back-up head-elevated positions with adjustment for a priori-defined potential confounders (body mass index and a difficult intubation prediction score [Mallampati, obstructive sleep Apnea, Cervical mobility, mouth Opening, Coma, severe Hypoxemia, and intubation by a non-Anesthesiologist score]). RESULTS: Five hundred twenty-eight patients were analyzed. Overall, at least 1 intubation-related complication occurred in 76 of 336 (22.6%) patients managed in the supine position compared with 18 of 192 (9.3%) patients managed in the back-up head-elevated position. After adjusting for body mass index and the Mallampati, obstructive sleep Apnea, Cervical mobility, mouth Opening, Coma, severe Hypoxemia, and intubation by a non-Anesthesiologist score, the odds of encountering the primary endpoint during an emergency tracheal intubation in a back-up head-elevated position was 0.47 (95% confidence interval, 0.26-0.83; P = 0.01). CONCLUSIONS: Placing patients in a back-up head-elevated position, compared with supine position, during emergency tracheal intubation was associated with a reduced odds of airway-related complications. © 2016 International Anesthesia Research Society.
Analysis of production, impact, and scientific collaboration on difficult airway through the Web of Science and Scopus (1981-2013)
BACKGROUND: Bibliometrics, the statistical analysis of written publications, is an increasingly popular approach to the assessment of scientific activity. Bibliometrics allows researchers to assess the impact of a field, or research area, and has been used to make decisions regarding research funding. We hypothesized that a bibliometric analysis of difficult airway research would demonstrate a growth in authors and articles over time. METHODS: Using the Web of Science (WoS) and Scopus databases, we conducted a search of published manuscripts on the difficult airway from January 1981 to December 2013. After removal of duplicates, we identified 2412 articles. We then analyzed the articles as a group to assess indicators of productivity, collaboration, and impact over this time period. RESULTS: We found an increase in productivity over the study period, with 37 manuscripts published between 1981 and 1990, and 1268 between 2001 and 2010 (P <.001). The growth rate of difficult airway papers was greater than that of anesthesiology research in general, with the CAGR (cumulative average growth rate) since 1999 for difficult airway >9% in both WoS and Scopus, versus a CAGR for anesthesiology as a whole of 0.64% in WoS and 3.30% in Scopus. Furthermore, we found a positive correlation between the number of papers published per author and the number of coauthored manuscripts (P <.001). We also found an increase in the number of coauthored manuscripts, in international cooperation between institutions, and in the number of citations for each manuscript. For any author, we also identified a positive relationship between the number of citations per manuscript and the number of papers published (P <.001). CONCLUSIONS: We found a greater increase over time in the number of difficult airway manuscripts than for anesthesiology research overall. We found that collaboration between authors increases their impact, and that an increase in collaboration increases citation rates. Publishing in English and in certain journals, and collaborating with certain authors and institutions, increases the visibility of manuscripts published on this subject. © Copyright 2017 International Anesthesia Research Society.
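Assuming the conventional (final/initial)^(1/years) − 1 definition behind the CAGR figures above, the arithmetic looks like this, with invented publication counts:

```python
# CAGR sketch: (final_count / initial_count) ** (1 / years) - 1.
# The counts below are hypothetical, chosen only to show the arithmetic.
def cagr(initial: float, final: float, years: int) -> float:
    return (final / initial) ** (1 / years) - 1

# e.g., a field growing from 40 to 140 papers/year over 14 years:
print(f"CAGR = {cagr(40, 140, 14) * 100:.1f}% per year")   # about 9.4%
```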
Newborn resuscitation skills in health care providers at a Zambian tertiary center, and comparison to World Health Organization standards
BACKGROUND: Birth asphyxia is a leading cause of early neonatal death. In 2013, 32% of neonatal deaths in Zambia were attributable to birth asphyxia and trauma. Basic, timely interventions are key to improving outcomes. However, data from the World Health Organization suggest that resuscitation is often not initiated, or is conducted suboptimally. Currently, there are few data on the quality of newborn resuscitation in the context of a tertiary center in a lower-middle income country. We aimed to measure the competencies of clinical practitioners responsible for newborn resuscitation. METHODS: This observational study was conducted over 5 months in Zambia. Health care professionals were recruited from anesthesia, pediatrics, and midwifery. Newborn resuscitation skills and knowledge were examined using the following: (1) multiple-choice questions; (2) a ventilation skills test; and (3) 2 low-medium fidelity simulation scenarios. Participant demographics, including previous resuscitation training and a self-efficacy rating score, were noted. The primary outcome was the performance score in a simulated scenario that assessed the care of a newborn that failed to respond to basic interventions. Secondary outcome measures included apnea times after delivery and performance in the other assessments. RESULTS: Seventy-eight participants were enrolled into the study (13 physician anesthesiology residents, 13 pediatric residents, and 52 midwives). A significant difference in interprofessional performance was observed when examining checklist scores for the unresponsive newborn simulated scenario (P = .006). The median (quartiles) checklist score (out of 18) was 14.0 (13.0-14.75) for the anesthesiologists, 11.0 (8.5-12.3) for the pediatricians, and 10.8 (8.3-13.9) for the midwives. A score of 14 or more was required to pass the scenario. There was no significant difference in performance between participants with and without previous newborn resuscitation training (P = .246). The median (quartiles) apnea time after delivery was significantly different between all groups (P = .01), with anesthesiology and pediatric residents performing similarly, 61 (37-97) and 63 (42.5-97.5) seconds, respectively. The midwifery participants displayed a significantly longer apnea time, 93.5 (66.3-129) seconds. Self-efficacy rating scores displayed no correlation between confidence level and the primary outcome, Spearman coefficient 0.06 (P = .55). CONCLUSIONS: Newborn resuscitation skills among health care professionals are varied. Midwives lead the majority of deliveries, with anesthesiologists and pediatricians only being present at operative or high-risk births. It is therefore common that midwifery practitioners will initiate resuscitation. Despite this, midwives perform poorly when compared to anesthesia and pediatric residents. To address this discrepancy, a multidisciplinary, simulation-based newborn resuscitation program should be considered, with continual clinical reinforcement of best practice. Copyright © 2018 International Anesthesia Research Society.
Open Reimplementation of the BIS Algorithms for Depth of Anesthesia
BACKGROUND: BIS (a brand of processed electroencephalogram [EEG] depth-of-anesthesia monitor) scores have become interwoven into clinical anesthesia care and research. Yet, the algorithms used by such monitors remain proprietary. We do not actually know what we are measuring. If we knew, we could better understand the clinical prognostic significance of deviations in the score and make greater research advances in closed-loop control or avoiding postoperative cognitive dysfunction or juvenile neurological injury. In previous work, an A-2000 BIS monitor was forensically disassembled and its algorithms (the BIS Engine) retrieved as machine code. Development of an emulator allowed BIS scores to be calculated from arbitrary EEG data for the first time. We now address the fundamental questions of how these algorithms function and what they represent physiologically. METHODS: EEG data were obtained during induction, maintenance, and emergence from 12 patients receiving customary anesthetic management for orthopedic, general, vascular, and neurosurgical procedures. These data were used to trigger the closely monitored execution of the various parts of the BIS Engine, allowing it to be reimplemented in a high-level language as an algorithm entitled ibis. Ibis was then rewritten for concision and physiological clarity to produce a novel completely clear-box depth-of-anesthesia algorithm titled openibis. RESULTS: The output of the ibis algorithm is functionally indistinguishable from the native BIS A-2000, with r = 0.9970 (0.9970-0.9971) and Bland-Altman mean difference between methods of -0.25 ± 2.6 on a unitless 0 to 100 depth-of-anesthesia scale. This precision exceeds the performance of any earlier attempt to reimplement the function of the BIS algorithms. The openibis algorithm also matches the output of the native algorithm very closely (r = 0.9395 [0.9390-0.9400], Bland-Altman 2.62 ± 12.0) in only 64 lines of readable code whose function can be unambiguously related to observable features in the EEG signal. The operation of the openibis algorithm is described in an intuitive, graphical form. CONCLUSIONS: The openibis algorithm finally provides definitive answers about the BIS: the reliance of the most important signal components on the low-gamma waveband and how these components are weighted against each other. Reverse engineering allows these conclusions to be reached with a clarity and precision that cannot be obtained by other means. These results contradict previous review articles that were believed to be authoritative: the BIS score does not appear to depend on a bispectral index at all. These results put clinical anesthesia research using depth-of-anesthesia scores on a firm footing by elucidating their physiological basis and enabling comparison to other animal models for mechanistic research. © 2022 Lippincott Williams and Wilkins. All rights reserved.
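The openibis algorithm itself is described in the article above and is not reproduced here. Purely to illustrate the kind of spectral feature the conclusions point to (weighting of low-gamma band power), the following generic sketch computes relative low-gamma power from an EEG segment; it is not the BIS or openibis algorithm, and the sampling rate, band edges, and synthetic signal are assumptions:

```python
# Generic illustration of a low-gamma band-power feature from EEG -- NOT the BIS
# or openibis algorithm. All parameters here are assumptions for demonstration.
import numpy as np
from scipy.signal import welch

fs = 128                                   # assumed EEG sampling rate, Hz
eeg = np.random.randn(fs * 30)             # placeholder 30-second EEG segment

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
resolution = freqs[1] - freqs[0]

def band_power(lo_hz, hi_hz):
    """Approximate power in [lo_hz, hi_hz) by summing PSD bins."""
    mask = (freqs >= lo_hz) & (freqs < hi_hz)
    return psd[mask].sum() * resolution

relative_low_gamma = band_power(30, 47) / band_power(0.5, 47)
print(f"relative low-gamma power = {relative_low_gamma:.3f}")
```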
Do technical skills correlate with non-technical skills in crisis resource management: A simulation study
Background: Both technical skills (TS) and non-technical skills (NTS) are key to ensuring patient safety in acute care practice and effective crisis management. These skills are often taught and assessed separately. We hypothesized that TS and NTS are not independent of each other, and we aimed to evaluate the relationship between TS and NTS during a simulated intraoperative crisis scenario. Methods: This study was a retrospective analysis of performances from a previously published work. After institutional ethics approval, 50 anaesthesiology residents managed a simulated crisis scenario of an intraoperative cardiac arrest secondary to a malignant arrhythmia. We used a modified Delphi approach to design a TS checklist, specific for the management of a malignant arrhythmia requiring defibrillation. All scenarios were recorded. Each performance was analysed by four independent experts. For each performance, two experts independently rated the technical performance using the TS checklist, and two other experts independently rated NTS using the Anaesthetists' Non-Technical Skills score. Results: TS and NTS were significantly correlated with each other (r=0.45, P<0.05). Conclusions: During a simulated 5 min resuscitation requiring crisis resource management, our results indicate that TS and NTS are related to one another. This research provides the basis for future studies evaluating the nature of this relationship, the influence of NTS training on the performance of TS, and whether NTS are generic and transferable between crises that require different TS. © 2012 The Author.
A survey evaluating burnout, health status, depression, reported alcohol and substance use, and social support of anesthesiologists
BACKGROUND: Burnout affects all medical specialists, and concern about it has become common in today's health care environment. The gold standard of burnout measurement in health care professionals is the Maslach Burnout Inventory-Human Services Survey (MBI-HSS), which measures emotional exhaustion, depersonalization (DP), and personal accomplishment. Besides affecting work quality, burnout is thought to affect health problems, mental health issues, and substance use negatively, although confirmatory data are lacking. This study evaluates some of these effects. METHODS: In 2011, the American Society of Anesthesiologists and the journal Anesthesiology cosponsored a webinar on burnout. As part of the webinar experience, we included access to a survey using MBI-HSS, 12-item Short Form Health Survey (SF-12), Social Support and Personal Coping (SSPC-14) survey, and substance use questions. Results were summarized using sample statistics, including mean, standard deviation, count, proportion, and 95% confidence intervals. Adjusted linear regression methods examined associations between burnout and substance use, SF-12, SSPC-14, and respondent demographics. RESULTS: Two hundred twenty-one respondents began the survey, and 170 (76.9%) completed all questions. There were 266 registrants total (31 registrants for the live webinar and 235 for the archive event), yielding an 83% response rate. Among respondents providing job titles, 206 (98.6%) were physicians and 2 (0.96%) were registered nurses. The frequency of high-risk responses ranged from 26% to 59% across the 3 MBI-HSS categories, but only about 15% had unfavorable scores in all 3. Mean mental composite score of the SF-12 was 1 standard deviation below normative values and was significantly associated with all MBI-HSS components. With SSPC-14, respondents scored better in work satisfaction and professional support than in personal support and workload. Males scored worse on DP and personal accomplishment and, relative to attending physicians, residents scored worse on DP. There was no significant association between MBI-HSS and substance use. CONCLUSIONS: Many anesthesiologists exhibit some high-risk burnout characteristics, and these are associated with lower mental health scores. Personal and professional support were associated with less emotional exhaustion, but overall burnout scores were associated with work satisfaction and professional support. Respondents were generally economically satisfied but also felt less in control at work and that their job kept them from friends and family. The association between burnout and substance use may not be as strong as previously believed. Additional work, perhaps with other survey instruments, is needed to confirm our results. © 2017 International Anesthesia Research Society.
Defining excellence in anaesthesia: the role of personal qualities and practice environment
Background: Calls for reform to postgraduate medical training structures in the UK have included suggestions that training should foster excellence and not simply ensure competence. Methods: We conducted a modified Delphi-type survey starting with an e-mail request to specialist anaesthetists involved in education, asking them to identify the attributes of an excellent anaesthetist. In focused group interviews, their coded and categorized responses were ranked, and suggestions were made for incorporation into anaesthesia education. We also compared the findings with currently available professional and educational guidance. Results: Our expert group strongly expressed the view that while superior knowledge and skills, associated with exceptional performance in clinical work, were fundamental to the excellent practitioner, they were not sufficient in themselves. A group of attributes that were personal qualities and functions of personality were also considered essential. The defining characteristic of excellence was, perhaps, the continuing urge to seek challenges and learn from them. Other high-ranking characteristics included clinical skills, interest in teaching, conscientiousness, innovation/originality, communication skills, and good relationships with patients. Knowledge for its own sake (personal involvement in research) was not rated highly, but applied knowledge was judged to underlie many of the most important categories. Conclusions: The achievement of excellence in anaesthesia is likely to depend on the successful interplay of individuals’ personal qualities and the environment in which they work. Thus, not only trainees but also educational supervisors, heads of departments, and those responsible for organizing training systems all have a part to play in the encouragement of excellence. © 2011 The Author(s)
Trends in central venous catheter insertions by anesthesia providers: An analysis of the Medicare Physician Supplier Procedure Summary from 2007 to 2016
BACKGROUND: Central line insertion is a core skill for anesthesiologists. Although recent technical advances have increased the safety of central line insertion and reduced the risk of central line-associated infection, noninvasive hemodynamic monitoring and improved intravenous access techniques have also reduced the need for perioperative central venous access. We hypothesized that the number of central lines inserted by anesthesiologists has decreased over the past decade. To test our hypothesis, we reviewed the Medicare Physician Supplier Procedure Summary (PSPS) database from 2007 to 2016. METHODS: Claims for central venous catheter placement were identified in the Medicare PSPS database for nontunneled and tunneled central lines. Pulmonary artery catheter insertion was included as a nontunneled line claim. We stratified line insertion claims by specialty for Anesthesiology (including Certified Registered Nurse Anesthetists and Anesthesiology Assistants), Surgery, Radiology, Pulmonary/Critical Care, Emergency Physicians, Internal Medicine, and practitioners who were not anesthesia providers such as Advanced Practice Nurses (APNs) and Physician Assistants (PAs). Utilization rates per 10,000 Medicare beneficiaries were then calculated by specialty and year. Time-based trends were analyzed using Joinpoint linear regression, and the Average Annual Percent Change (AAPC) was calculated. RESULTS: Between 2007 and 2016, total claims for central venous catheter insertions of all types decreased from 440.9 to 325.3 claims/10,000 beneficiaries (AAPC = -3.4, 95% confidence interval [CI], -3.6 to -3.2: P < .001). When analyzed by provider specialty and year, the number of nontunneled line insertion claims fell from 43.1 to 15.9 claims/10,000 (AAPC = -7.1; -7.3 to -7.0: P < .001) for surgeons, from 21.3 to 18.5 claims/10,000 (AAPC = -2.5; -2.8 to -2.1: P < .001) for radiologists, and from 117.4 to 72.7 claims/10,000 (AAPC = -5.2; 95% CI, -6.3 to -4.0: P < .001) for anesthesia providers. In contrast, line insertions increased from 18.2 to 26.0 claims/10,000 (AAPC = 3.2; 2.3-4.2: P < .001) for Emergency Physicians and from 3.2 to 9.3 claims/10,000 (AAPC = 6.0; 5.1-6.9: P < .001) for PAs and APNs who were not anesthesia providers. Among anesthesia providers, the share of line claims made by nurse anesthetists increased by 14.5% over the time period. CONCLUSIONS: We observed a 38.3% decrease in claims for nontunneled central lines placed by anesthesiologists from 2007 to 2016. These findings have implications for anesthesiology resident training and maintenance of competence among practicing clinicians. Further research is needed to clarify the effect of decreasing line insertion numbers on line insertion competence among anesthesiologists. Copyright © 2019 International Anesthesia Research Society.
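The AAPC values above are conventionally obtained from log-linear (Joinpoint) segment slopes; under that standard definition, the calculation looks like the following sketch with invented slopes and segment lengths:

```python
# AAPC sketch under the standard Joinpoint definition:
# APC_i = (exp(b_i) - 1) * 100 for each segment's log-linear slope b_i, and
# AAPC  = (exp(sum(w_i * b_i) / sum(w_i)) - 1) * 100 with w_i = segment length in years.
# Slopes and segment lengths below are invented for illustration.
import math

segments = [
    {"slope": -0.02, "years": 4},   # log(rate) declining ~2%/yr for 4 years
    {"slope": -0.05, "years": 5},   # steeper decline for the next 5 years
]

weighted = sum(s["slope"] * s["years"] for s in segments)
total_years = sum(s["years"] for s in segments)
aapc = (math.exp(weighted / total_years) - 1) * 100
print(f"AAPC = {aapc:.1f}% per year")   # about -3.6%/yr for these made-up segments
```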
South African Paediatric Surgical Outcomes Study: a 14-day prospective, observational cohort study of paediatric surgical patients
Background: Children comprise a large proportion of the population in sub-Saharan Africa. The burden of paediatric surgical disease exceeds available resources in Africa, potentially increasing morbidity and mortality. There are few prospective paediatric perioperative outcomes studies, especially in low- and middle-income countries (LMICs). Methods: We conducted a 14-day multicentre, prospective, observational cohort study of paediatric patients (aged <16 yrs) undergoing surgery in 43 government-funded hospitals in South Africa. The primary outcome was the incidence of in-hospital postoperative complications. Results: We recruited 2024 patients at 43 hospitals. The overall incidence of postoperative complications was 9.7% [95% confidence interval (CI): 8.4–11.0]. The most common postoperative complications were infective (7.3%; 95% CI: 6.2–8.4%). The in-hospital mortality rate was 1.1% (95% CI: 0.6–1.5), and nine of the deaths (41%) were in ASA physical status 1 and 2 patients. The preoperative risk factors independently associated with postoperative complications were ASA physical status, urgency of surgery, severity of surgery, and an infective indication for surgery. Conclusions: The risk factors, frequency, and type of complications after paediatric surgery differ between LMICs and high-income countries. The in-hospital mortality is 10 times greater than in high-income countries. These findings should be used to develop strategies to improve paediatric surgical outcomes in LMICs, and support the need for larger prospective, observational paediatric surgical outcomes research in LMICs. Clinical trial registration: NCT03367832. © 2018 British Journal of Anaesthesia
Discrepancies between randomized controlled trial registry entries and content of corresponding manuscripts reported in anesthesiology journals
BACKGROUND: Clinical trial registries have been created to reduce reporting bias. Study registration enables the examination of discrepancies between the original study design and the final results reported in the literature. The main objective of the current investigation is to compare the original clinical trial registrations and the corresponding published results in high-impact anesthesiology journals. Specifically, we examined the rates of major discrepancies (i.e., involving primary outcome, sample size calculation, or study intervention). METHODS: The 5 highest-impact factor anesthesiology journals (Anaesthesia, Anesthesia & Analgesia, Anesthesiology, British Journal of Anaesthesia, and Regional Anesthesia and Pain Medicine) were screened for randomized controlled trials published in 2013. A major discrepancy was defined as a difference in the content of the manuscript compared with the original entry in a clinical trial registry for at least one of the 3 areas: primary outcome, target sample size, and study intervention. The type of primary outcome discrepancy was further classified as adding/omitting measures or outcomes, downgrading/upgrading from primary to secondary outcomes, or changing the definition of the outcomes measured. RESULTS: Two hundred one articles were included in the final analysis. One hundred thirty of 201 (64%; 95% confidence interval [CI], 57%-71%) published clinical trials were not prospectively registered as recommended by the International Committee of Medical Journal Editors. Registration rates were significantly lower for studies performed in the United States, 15 of 40 (37%), than for studies not performed in the United States, 92 of 161 (57%); P = 0.03. Fifty-two of 107 (48%; 95% CI, 39%-58%) registered trials had a major discrepancy when the published manuscript was compared with the clinical trial registration. Thirty-one of the 46 (67%; 95% CI, 51%-80%) primary outcome discrepancies had changes in the outcome with characteristics of reporting bias. CONCLUSIONS: We detected a high rate of major discrepancies between the published results and the original registered protocols for clinical trial manuscripts in high-impact anesthesiology journals. Future action to reduce the negative impact of reporting bias in the anesthesiology field is warranted. Copyright © 2015 International Anesthesia Research Society.
Closed-loop fluid administration compared to anesthesiologist management for hemodynamic optimization and resuscitation during surgery: An in vivo study
BACKGROUND: Closed-loop systems have been designed to assist practitioners in maintaining stability of various physiologic variables in the clinical setting. In this context, we recently performed in silico testing of a novel closed-loop fluid management system that is designed for cardiac output and pulse pressure variation monitoring and optimization. The goal of the present study was to assess the effectiveness of this newly developed system in optimizing hemodynamic variables in an in vivo surgical setting. METHODS: Sixteen Yorkshire pigs underwent a 2-phase hemorrhage protocol and were resuscitated by either the Learning Intravenous Resuscitator closed-loop system or an anesthesiologist. Median hemodynamic values and variation of hemodynamics were compared between groups. RESULTS: Cardiac index (in liters per minute per square meter) and stroke volume index (in milliliters per square meter) were higher in the closed-loop group compared with the anesthesiologist group over the protocol (3.7 [3.4-4.1] vs 3.5 [3.2-3.9]; 95% Wald confidence interval, -0.5 to -0.23; P < 0.0005 and 40 [34-45] vs 36 [31-38]; 95% Wald confidence interval, -5.9 to -3.1; P < 0.0005, respectively). There was no significant difference in total fluid administration between the closed-loop and anesthesiologist groups (3685 [3230-4418] vs 3253 [2735-3926] mL; 95% confidence interval, -1651 to 431; P = 0.28). Closed-loop group animals also had lower coefficients of variance of cardiac index and stroke volume index during the protocol (11% [10%-16%] vs 22% [18%-23%]; confidence interval, 0.8%-12.3%; P = 0.02 and 11% [8%-16%] vs 17% [13%-21%]; confidence interval, 0.2%-11.4%; P = 0.04, respectively). CONCLUSION: This in vivo study, building on previous simulation work, demonstrates that the closed-loop fluid management system used in this experiment can perform fluid resuscitation during mild and severe hemorrhages and is able to maintain high cardiac output and stroke volume while reducing hemodynamic variability. Copyright © 2013 International Anesthesia Research Society.
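As a caricature of the decision logic a closed-loop fluid controller of this general type might use (bolus when a dynamic preload index is high, withhold when the previous bolus failed to raise cardiac index), here is a hypothetical sketch; the thresholds are assumptions and this is not the Learning Intravenous Resuscitator algorithm:

```python
# Generic (hypothetical) closed-loop fluid decision sketch -- NOT the Learning
# Intravenous Resuscitator algorithm. Thresholds are illustrative assumptions.
from typing import Optional

PPV_THRESHOLD = 13.0      # % above which preload responsiveness is assumed
MIN_CI_GAIN = 0.10        # require >=10% cardiac index rise to call a bolus effective

def bolus_was_effective(ci_before: float, ci_after: float) -> bool:
    """A prior bolus 'worked' if cardiac index rose by at least MIN_CI_GAIN."""
    return (ci_after - ci_before) / ci_before >= MIN_CI_GAIN

def recommend_bolus(ppv_percent: float, last_bolus_effective: Optional[bool]) -> bool:
    """Suggest a small fluid bolus when PPV is high, unless the last bolus failed."""
    if ppv_percent < PPV_THRESHOLD:
        return False
    return last_bolus_effective is not False

print(recommend_bolus(ppv_percent=16.0, last_bolus_effective=None))               # True
print(recommend_bolus(ppv_percent=16.0,
                      last_bolus_effective=bolus_was_effective(3.5, 3.55)))       # False
```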
The Current State of Combined Pediatric Anesthesiology-Critical Care Practice: A Survey of Dual-Trained Practitioners in the United States
Background: Combined practice in pediatric anesthesiology (PA) and pediatric critical care medicine (PCCM) was historically common but has declined markedly with time. The reasons for this temporal shift are unclear, but existing evidence suggests that length of training is a barrier to contemporary trainees. Among current practitioners, restriction in dual-specialty practice also occurs, for reasons that are unknown at present. We sought to describe the demographics of this population, investigate their perceptions about the field, and consider factors that lead to attrition. METHODS: We conducted a cross-sectional, observational study of physicians in the United States with a combined practice in PA and PCCM. The survey was distributed electronically and anonymously to the distribution list of the Pediatric Anesthesia Leadership Council (PALC) of the Society for Pediatric Anesthesia (SPA), directing the recipients to forward the link to their faculty meeting our inclusion criteria. Attending-level respondents (n = 62) completed an anonymous, 40-question multidomain survey. RESULTS: Forty-seven men and 15 women, with a median age of 51, completed the survey. Major leadership positions are held by 44%, and 55% are externally funded investigators. A minority (26%) have given up one or both specialties, citing time constraints and politics as the dominant reasons. Duration of training was cited as the major barrier to entry by 77%. Increasing age and faculty rank and lack of a comparably trained institutional colleague were associated with attrition from dual-specialty practice. The majority (88%) reported that they would do it all again. CONCLUSIONS: The current cohort of pediatric anesthesiologist-intensivists in the United States is a small but accomplished group of physicians. Efforts to train, recruit, and retain such providers must address systematic barriers to completion of the requisite training and continued practice. © 2021 Lippincott Williams and Wilkins. All rights reserved.
Defining competence in obstetric epidural anaesthesia for inexperienced trainees
Background: Cumulative sum (CUSUM) analysis has been used for assessing competence of trainees learning new technical skills. One of its disadvantages is the required definition of acceptable and unacceptable success rates. We therefore monitored the development of competence amongst trainees new to obstetric epidural anaesthesia in a large public hospital. Methods: Obstetric epidural data were collected prospectively between January 1996 and December 2011. Success rates for inexperienced trainees were calculated retrospectively for (1) the whole database, (2) each consecutive attempt, and (3) each trainee's individual overall success rate. Acceptable and unacceptable success rates were defined and CUSUM graphs generated for each trainee. Competence was assessed for each trainee and the number of attempts to reach competence recorded. Results: Mean (SD) success rate for all inexperienced trainees was 76.8 (0.1%), range 63-90%. Consecutive attempt success rate produced a learning curve with a mean success rate commencing at 58% on attempt 1. After attempt 10, the attempt number had no effect on subsequent success rates. From these results, the acceptable and unacceptable success rates were set at 65% and 55%, respectively. CUSUM graphs demonstrated that 76 of 81 trainees were competent after a mean of 46 (22) attempts. Conclusions: CUSUM is useful for assessing trainee epidural competence. Trainees require approximately 50 attempts, as defined by CUSUM, to reach competence. © The Author 2015. Published by Oxford University Press on behalf of the British Journal of Anaesthesia. All rights reserved.
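For readers unfamiliar with the method, the sketch below shows how a CUSUM learning curve is accumulated from a sequence of successes and failures using the standard failure-rate parameterisation, seeded with the acceptable (65%) and unacceptable (55%) success rates reported above. The example attempt sequence is invented, and no decision limits from the study are reproduced.

```python
import math

def cusum_curve(outcomes, acceptable_success=0.65, unacceptable_success=0.55):
    """Running CUSUM score for a sequence of attempt outcomes
    (True = successful epidural, False = failure)."""
    p0 = 1.0 - acceptable_success      # acceptable failure rate
    p1 = 1.0 - unacceptable_success    # unacceptable failure rate
    a = math.log(p1 / p0)
    b = math.log((1.0 - p0) / (1.0 - p1))
    s = b / (a + b)                    # decrement applied after a success
    score, curve = 0.0, []
    for success in outcomes:
        score += -s if success else (1.0 - s)
        curve.append(score)
    return curve

# Invented sequence: 2 early failures followed by a run of successes drives the
# CUSUM downward, the direction conventionally read as approaching competence.
print([round(x, 2) for x in cusum_curve([False, False] + [True] * 10)])
```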
A Comparison of Measurements of Change in Respiratory Status in Spontaneously Breathing Volunteers by the ExSpiron Noninvasive Respiratory Volume Monitor Versus the Capnostream Capnometer
BACKGROUND: Current respiratory monitoring technologies such as pulse oximetry and capnography have been insufficient to identify early signs of respiratory compromise in nonintubated patients. Pulse oximetry, when used appropriately, will alert the caregiver to an episode of dangerous hypoxemia. However, desaturation lags significantly behind hypoventilation, and alarm fatigue due to false alarms poses an additional problem. Capnography, which measures end-tidal CO2 (Etco2) and respiratory rate (RR), has not been universally used for nonintubated patients for multiple reasons, including the inability to reliably relate Etco2 to the level of impending respiratory compromise and lack of patient compliance. Serious complications related to respiratory compromise continue to occur as evidenced by the Anesthesiology 2015 Closed Claims Report. The Anesthesia Patient Safety Foundation has stressed the need to improve monitoring modalities so that "no patient will be harmed by opioid-induced respiratory depression." A recently available, Food and Drug Administration-approved noninvasive respiratory volume monitor (RVM) can continuously and accurately monitor actual ventilation metrics: tidal volume, RR, and minute ventilation (MV). We designed this study to compare the capabilities of capnography versus the RVM to detect changes in respiratory metrics. METHODS: Forty-eight volunteer subjects completed the study. RVM measurements (MV and RR) were collected simultaneously with capnography (Etco2 and RR) using 2 sampling methods (nasal scoop cannula and snorkel mouthpiece with in-line Etco2 sensor). For each sampling method, each subject performed 6 breathing trials at 3 different prescribed RRs (slow [5 min⁻¹], normal [12.6 ± 0.6 min⁻¹], and fast [25 min⁻¹]). All data are presented as mean ± SEM unless otherwise indicated. RESULTS: Following transitions in prescribed RRs, the RVM reached a new steady state value of MV in 37.7 ± 1.4 seconds, while Etco2 changes were notably slower, often failing to reach a new asymptote before a 2.5-minute threshold. RRs as measured by RVM and capnography during steady breathing were strongly correlated (R = 0.98 ± 0.01, bias = Capnograph-based RR - RVM-based RR = 0.21 ± 1.24 [SD] min⁻¹). As expected, changes in MV were negatively correlated with changes in Etco2. However, large changes in MV following transitions in prescribed RR resulted in relatively small changes in Etco2 (instrument sensitivity = ΔEtco2/ΔMV = -0.71 ± 0.11 and -0.55 ± 0.11 mm Hg per 1 L/min for nasal and in-line sampling, respectively). Nasal cannula Etco2 measurements were on average 4 mm Hg lower than in-line measurements. CONCLUSIONS: RVM measurements of MV change more rapidly and by a greater degree than capnography in response to respiratory changes in nonintubated patients. Earlier detection could enable earlier intervention that could potentially reduce frequency and severity of complications due to respiratory depression. Copyright © 2016 International Anesthesia Research Society.
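The instrument sensitivity quoted above is simply the change in Etco2 divided by the corresponding change in MV across a transition in prescribed respiratory rate. A minimal sketch of that arithmetic, using invented paired measurements rather than study data:

```python
def instrument_sensitivity(etco2_before, etco2_after, mv_before, mv_after):
    """Sensitivity = change in end-tidal CO2 (mm Hg) per 1 L/min change in minute ventilation."""
    return (etco2_after - etco2_before) / (mv_after - mv_before)

# Hypothetical transition from normal to fast breathing: MV rises by 6 L/min,
# Etco2 falls by 4 mm Hg, giving a sensitivity of about -0.67 mm Hg per L/min.
print(round(instrument_sensitivity(etco2_before=38, etco2_after=34, mv_before=6.0, mv_after=12.0), 2))
```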
Development of education and research in anesthesia and intensive care medicine at the university teaching hospital in Lusaka, Zambia: A descriptive observational study
BACKGROUND: Data from 2006 show that the practice of anesthesia at the University Teaching Hospital in Lusaka, Zambia was underdeveloped by international standards. Not only was there inadequate provision of resources related to environment, equipment, and drugs, but also a severe shortage of staff, with no local capability to train future physician anesthetic providers. There was also no research base on which to develop the specialty. This study aimed to evaluate patient care, education and research to determine whether conditions had changed a decade later. METHODS: A mix of qualitative data and quantitative data was gathered to inform the current state of anesthesia at the University Teaching Hospital, Lusaka, Zambia. Semistructured interviews were conducted with key staff identified by purposive sampling, including staff who had worked at the hospital throughout 2006 to 2015. Further data detailing conditions in the environment were collected by reviewing relevant departmental and hospital records spanning the study period. All data were analyzed thematically, using the framework described in the 2006 study, which described patient care, education, and research related to anesthetic practice at the hospital. RESULTS: There have been positive developments in most areas of anesthetic practice, with the most striking being implementation of a postgraduate training program for physician anesthesiologists. This has increased physician anesthesia staff in Zambia 6-fold within 4 years, and created an active research stream as part of the program. Standards of monitoring and availability of drugs have improved, and anesthetic activity has expanded out of operating theaters into the rest of the hospital. A considerable increase in the number of cesarean deliveries performed under spinal anesthetic may be a marker for safer anesthetic practice. Anesthesiologists have yet to take responsibility for the management of pain. CONCLUSIONS: The establishment of international partnerships to support postgraduate training of physician anesthetists in Zambia has created a significant increase in the number of anesthesia providers and has further developed nearly all aspects of anesthetic practice. The facilitation of the training program by a global health partnership has leveraged high-level support for the project and provided opportunities for North-South and international learning. © 2017 International Anesthesia Research Society.
An international survey of airway management education in 61 countries†
Background: Deficiencies in airway management skills and judgement contribute to poor outcomes. Airway management practice guidelines emphasise the importance of education. Little is known about the global uptake of guidelines, availability of equipment, provision of training, assessment of skills, and confidence with procedures. Methods: We devised a survey to examine these issues. Initially, 24 127 anaesthetists were questioned in New Zealand, Canada, South Africa, UK, India, and Germany, representing the home countries of the members of the Worldwide Airway Meeting (2015) Education Group; however, the survey could be forwarded to others. The survey was open for a maximum of 90 days. Results: We received 4948 fully or partially completed surveys from 61 countries: 33 high-income and 28 middle- or low-income countries. Most respondents were consultants (77.2%, n=4948), and the remainder trainees, with a male/female ratio of 1.8:1 (3105 males, n=4866). Of those responding, 1358 (76.6%, n=1798) were members of an airway interest group. Most respondents (91.3% of 2910) agreed with assessment of airway skills, fewer (2237; 59.7%, n=3750) reported requiring airway training for completion of training, and only 810 (33.6%, n=2408) reported it as a requirement for continuing medical education. Reported confidence was lowest for awake tracheal intubation, front-of-neck access, and retrograde intubation. Conclusions: Global training is variable in its delivery and necessity. Confidence is limited in potentially life-saving techniques. The desire for assessment appears universal and may improve standards, but in resource- or time-limited environments this will be challenging. © 2020
Enhancing Feedback on Professionalism and Communication Skills in Anesthesia Residency Programs
BACKGROUND: Despite its importance, training faculty to provide feedback to residents remains challenging. We hypothesized that, overall, at 4 institutions, a faculty development program on providing feedback on professionalism and communication skills would lead to (1) an improvement in the quantity, quality, and utility of feedback and (2) an increase in feedback containing negative/constructive feedback and pertaining to professionalism/communication. As secondary analyses, we explored these outcomes at the individual institutions. METHODS: In this prospective cohort study (October 2013 to July 2014), we implemented a video-based educational program on feedback at 4 institutions. Feedback records from 3 months before to 3 months after the intervention were rated for quality (0-5), utility (0-5), and whether they had negative/constructive feedback and/or were related to professionalism/communication. Feedback records during the preintervention, intervention, and postintervention periods were compared using the Kruskal-Wallis and χ2 tests. Data are reported as median (interquartile range) or proportion/percentage. RESULTS: A total of 1926 feedback records were rated. The institutions overall did not have a significant difference in feedback quantity (preintervention: 855/3046 [28.1%]; postintervention: 896/3327 [26.9%]; odds ratio: 1.06; 95% confidence interval, 0.95-1.18; P =.31), feedback quality (preintervention: 2 [1-4]; intervention: 2 [1-4]; postintervention: 2 [1-4]; P =.90), feedback utility (preintervention: 1 [1-3]; intervention: 2 [1-3]; postintervention: 1 [1-2]; P =.61), or percentage of feedback records containing negative/constructive feedback (preintervention: 27%; intervention: 32%; postintervention: 25%; P =.12) or related to professionalism/communication (preintervention: 23%; intervention: 33%; postintervention: 24%; P =.03). Institution 1 had a significant difference in feedback quality (preintervention: 2 [1-3]; intervention: 3 [2-4]; postintervention: 3 [2-4]; P =.001) and utility (preintervention: 1 [1-3]; intervention: 2 [1-3]; postintervention: 2 [1-4]; P =.008). Institution 3 had a significant difference in the percentage of feedback records containing negative/constructive feedback (preintervention: 16%; intervention: 28%; postintervention: 17%; P =.02). Institution 2 had a significant difference in the percentage of feedback records related to professionalism/communication (preintervention: 26%; intervention: 57%; postintervention: 31%; P <.001). CONCLUSIONS: We detected no overall changes but did detect different changes at each institution despite the identical intervention. The intervention may be more effective with new faculty and/or smaller discussion sessions. Future steps include refining the rating system, exploring ways to sustain changes, and investigating other factors contributing to feedback quality and utility. © Copyright 2017 International Anesthesia Research Society.
Do You Really Mean It? Assessing the Strength, Frequency, and Reliability of Applicant Commitment Statements during the Anesthesiology Residency Match
BACKGROUND: Despite the critical nature of the residency interview process, few metrics have been shown to adequately predict applicant success in matching to a given program. While evaluating and ranking potential candidates, bias can occur when applicants make commitment statements to a program. Survey data show that pressure to demonstrate commitment leads applicants to express commitment to multiple institutions, including telling >1 program that they will rank them #1. The primary purpose of this cross-sectional observational study is to evaluate the frequency of commitment statements from applicants to 5 anesthesiology departments during a single interview season, report how often each statement is associated with a successful match, and identify how frequently candidates incorrectly represented commitments to rank a program #1. METHODS: During the 2014 interview season, 5 participating anesthesiology programs collected written and verbal communications from applicants. Three residency program directors independently reviewed the statements to classify them into 1 of 3 categories: guaranteed commitment, high rank commitment, or strong interest. Each institution provided a deidentified rank list with associated commitment statements, biographical data, whether candidates were ranked-to-match, and whether they successfully matched. RESULTS: Program directors consistently differentiated among strong interest, high rank, and guaranteed commitment statements with κ coefficients of 0.9 (95% CI, 0.8-0.9) or greater between any pair of reviewers. Overall, 35.8% of applicants (226/632) provided a statement demonstrating at least strong interest and 5.4% (34/632) gave guaranteed commitment statements. Guaranteed commitment statements resulted in a 95.7% match rate to that program in comparison to statements of high rank (25.6%), strong interest (14.6%), and those who provided no statement (5.9%). For those providing guaranteed commitment statements, it can be assumed that the 1 candidate (4.3%) who did not match incorrectly represented himself. Variables such as couples match, "R" positions, and not being ranked-to-match on both advanced and categorical rank lists were eliminated because they can result in a nonmatch despite truthfully ranking a program #1. CONCLUSIONS: Each level of commitment statement resulted in a progressively increased frequency of a successful match to the recipient program. Only 5.4% of applicants committed to rank a program #1, but these statements were very reliable. These data can help program directors interpret commitment statements and assist accurate evaluation of the interest of candidates throughout the match process. © 2019 International Anesthesia Research Society.
Nationwide Clinical Practice Patterns of Anesthesiology Critical Care Physicians: A Survey to Members of the Society of Critical Care Anesthesiologists
BACKGROUND: Despite the growing contributions of critical care anesthesiologists to clinical practice, research, and administrative leadership of intensive care units (ICUs), relatively little is known about the subspecialty-specific clinical practice environment. An understanding of contemporary clinical practice is essential to recognize the opportunities and challenges facing critical care anesthesia, optimize staffing patterns, assess sustainability and satisfaction, and strategically plan for future activity, scope, and training. This study surveyed intensivists who are members of the Society of Critical Care Anesthesiologists (SOCCA) to evaluate practice patterns of critical care anesthesiologists, including compensation, types of ICUs covered, models of overnight ICU coverage, and relationships between these factors. We hypothesized that variability in compensation and practice patterns would be observed between individuals. METHODS: Board-certified critical care anesthesiologists practicing in the United States were identified using the SOCCA membership distribution list and invited to take a voluntary online survey between May and June 2021. Multiple-choice questions with both single- and multiple-select options were used for answers with categorical data, and adaptive questioning was used to clarify stem-based responses. Respondents were asked to describe practice patterns at their respective institutions and provide information about their demographics, salaries, and effort in ICUs, as well as other activities. RESULTS: A total of 490 participants were invited to take this survey, and 157 (response rate 32%) surveys were completed and analyzed. The majority of respondents were White (73%), male (69%), and younger than 50 years of age (82%). The cardiothoracic/cardiovascular ICU was the most common practice setting, with 69.5% of respondents reporting time working in this unit. Significant variability was observed in ICU practice patterns. Respondents reported spending an equal proportion of their time in clinical practice in the operating rooms and ICUs (median, 40%; interquartile range [IQR], 20%-50%), whereas a smaller proportion - primarily those who completed their training before 2009 - reported administrative or research activities. Female respondents reported salaries that were $36,739 less than those of male respondents; however, this difference was not statistically significant, and after adjusting for age and practice type, these differences were less pronounced (-$27,479.79; 95% confidence interval [CI], -$57,232.61 to $2273.03; P =.07). CONCLUSIONS: These survey data provide a current snapshot of anesthesiology critical care clinical practice patterns in the United States. Our findings may inform decision-making around the initiation and expansion of critical care services and optimal staffing patterns, as well as provide a basis for further work that focuses on intensivist satisfaction and burnout. © 2023 Lippincott Williams and Wilkins. All rights reserved.
A Contemporary Analysis of Medicolegal Issues in Obstetric Anesthesia between 2005 and 2015
BACKGROUND: Detailed reviews of closed malpractice claims have provided insights into the most common events resulting in litigation and helped improve anesthesia care. In the past 10 years, there have been multiple safety advancements in the practice of obstetric anesthesia. We investigated the relationship among contributing factors, patient injuries, and legal outcome by analyzing a contemporary cohort of closed malpractice claims where obstetric anesthesiology was the principal defendant. METHODS: The Controlled Risk Insurance Company (CRICO) is the captive medical liability insurer of the Harvard Medical Institutions that, in collaboration with other insurance companies and health care entities, contributes to the Comparative Benchmark System database for research purposes. We reviewed all (N = 106) closed malpractice cases related to obstetric anesthesia between 2005 and 2015 and compared the following classes of injury: maternal death and brain injury, neonatal death and brain injury, maternal nerve injury, and maternal major and minor injury. In addition, settled claims were compared to the cases that did not receive payment. χ2, analysis of variance, Student t test, and Kruskal-Wallis tests were used for comparison between the different classes of injury. RESULTS: The largest number of claims, 54.7%, involved maternal nerve injury; 77.6% of these claims did not receive any indemnity payment. Cases involving maternal death or brain injury comprised 15.1% of all cases and were more likely to receive payment, especially in the high range (P =.02). The most common causes of maternal death or brain injury were high neuraxial blocks, embolic events, and failed intubation. Claims for maternal major and minor injury were least likely to receive payment (P =.02) and were most commonly (34.8%) associated with only emotional injury. Compared to the dropped/denied/dismissed claims, settled claims more frequently involved general anesthesia (P =.03), were associated with delays in care (P =.005), and took longer to resolve (3.2 vs 1.3 years; P <.0001). CONCLUSIONS: Obstetric anesthesia remains an area of significant malpractice liability. Opportunities for practice improvement in the area of severe maternal injury include timely recognition of high neuraxial block, availability of adequate resuscitative resources, and the use of advanced airway management techniques. Anesthesiologists should avoid delays in maternal care, establish clear communication, and follow their institutional policy regarding neonatal resuscitation. Prevention of maternal neurological injury should be directed toward performing neuraxial techniques at the lowest lumbar spine level possible and prevention/recognition of retained neuraxial devices. © 2019 International Anesthesia Research Society.
Developing a Real-Time Electroencephalogram-Guided Anesthesia-Management Curriculum for Educating Residents: A Single-Center Randomized Controlled Trial
BACKGROUND: Different anesthetic drugs and patient factors yield unique electroencephalogram (EEG) patterns. Yet, it is unclear how best to teach trainees to interpret EEG time series data and the corresponding spectral information for intraoperative anesthetic titration, or what effect this might have on outcomes. METHODS: We developed an electronic learning curriculum (ELC) that covered EEG spectrogram interpretation and its use in anesthetic titration. Anesthesiology residents at a single academic center were randomized to receive this ELC and be given spectrogram monitors for intraoperative use, versus the standard residency curriculum alone without intraoperative spectrogram monitors. We hypothesized that this intervention would result in lower inhaled anesthetic administration, measured as the age-adjusted total minimum alveolar concentration (MAC) fraction (aaMAC), to patients ≥60 years old during the postintervention period (the primary study outcome). To study this effect and to determine whether the 2 groups were administering similar anesthetic doses pre- versus postintervention, we compared aaMAC between control and intervention group residents both before and after the intervention. To measure efficacy in the postintervention period, we included only those cases in the intervention group when the monitor was actually used. Multivariable linear mixed-effects modeling was performed for aaMAC fraction and hospital length of stay (LOS; a non-prespecified secondary outcome), with a random effect for individual resident. A multivariable linear mixed-effects model was also used in a sensitivity analysis to determine if there was a group (intervention versus control group) by time period (post- versus preintervention) interaction for aaMAC. Resident EEG knowledge difference (a prespecified secondary outcome) was compared with a 2-sided 2-group paired t test. RESULTS: Postintervention, there was no significant aaMAC difference in patients cared for by the ELC group (n = 159 patients) versus control group (N = 325 patients; aaMAC difference = -0.03; 95% confidence interval [CI], -0.09 to 0.03; P =.32). In a multivariable mixed model, the interaction of time period (post- versus preintervention) and group (intervention versus control) led to a nonsignificant reduction of -0.05 aaMAC (95% CI, -0.11 to 0.01; P =.102). ELC group residents (N = 19) showed a greater increase in EEG knowledge test scores than control residents (N = 20) from before to after the ELC intervention (6-point increase; 95% CI, 3.50-8.88; P <.001). Patients cared for by the ELC group versus control group had a reduced hospital LOS (median, 2.48 vs 3.86 days, respectively; P =.024). CONCLUSIONS: Although there was no effect on mean aaMAC, these results demonstrate that this EEG-ELC intervention increased resident knowledge and raise the possibility that it may reduce hospital LOS. © 2022 Lippincott Williams and Wilkins. All rights reserved.
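As a rough sketch of how the group-by-period interaction for aaMAC could be specified as a linear mixed-effects model with a random intercept per resident, the code below uses an entirely simulated data set and hypothetical column names; it does not reproduce the covariates or data of the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for i in range(10):                                   # 10 hypothetical residents
    resident, group = f"r{i}", ("elc" if i % 2 else "control")
    for period in ("pre", "post"):
        for _ in range(5):                            # 5 cases per resident per period
            # Assumed effect for illustration: the ELC arm lowers its dose slightly postintervention.
            dose = 0.90 - (0.05 if (group == "elc" and period == "post") else 0.0)
            rows.append({"resident": resident, "group": group,
                         "period": period, "aaMAC": dose + rng.normal(0, 0.05)})
cases = pd.DataFrame(rows)

# Random intercept per resident; the group:period interaction estimates whether
# the ELC arm changed its aaMAC more than the control arm between periods.
model = smf.mixedlm("aaMAC ~ group * period", data=cases, groups=cases["resident"])
print(model.fit().summary())
```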
Distraction and interruption in anaesthetic practice
Background: Distractions are a potential threat to patient safety. Previous research has focused on parts of the anaesthetic process but not on entire cases, and has focused on hazards rather than existing defences against error. Methods: We observed anaesthetists at work in the operating theatre and quantified and classified the distracting events occurring. We also conducted semi-structured interviews with consultant anaesthetists to explore existing strategies for managing distractions. Results: We observed 30 entire anaesthetics in a variety of surgical settings, with a total observation time of 31 h 2 min. We noted 424 distracting events. The average frequency of distracting events, per minute, was 0.23 overall, with 0.29 during induction, 0.33 during transfer into theatre, 0.15 during maintenance, and 0.5 during emergence. Ninety-two (22%) events were judged to have a negative effect, and 14 (3.3%) positive. Existing strategies for managing distractions included ignoring inappropriate intrusions or conversation; asking staff with non-urgent matters to return later at a quieter time; preparation and checking of drugs and equipment ahead of time; acting as an example to other staff in timing their own potentially distracting actions; and being aware of one's own emotional and cognitive state. Conclusions: Distractions are common in anaesthetic practice and managing them is a key professional skill which appears to be part of the tacit knowledge of anaesthesia. Anaesthetists should also bear in mind that the potential for distraction is mutual and reciprocal and their actions can also threaten safety by interrupting other theatre staff. © 2012 The Author [2012].
Development of a scheduled drug diversion surveillance system based on an analysis of atypical drug transactions
BACKGROUND: Drug diversion in the operating room (OR) by anesthesia providers is a recognized problem with significant morbidity and mortality. Use of anesthesia drug dispensing systems in ORs, coupled with the presence of anesthesia or OR information management systems, may allow detection through database queries screening for atypical drug transactions. Although such transactions occur innocently during the course of normal clinical care, many are suspicious for diversion. METHODS: We used a data mining approach to search for possible indicators of diversion by querying our information system databases. Queries were sought that identified our two known cases of drug diversion and their onset. A graphical approach was used to identify outliers, with diversion subsequently assessed through a manual audit of transactions. RESULTS: Frequent transactions on patients after the end of their procedures, and on patients having procedures in locations different from that of the dispensing machine, identified our index cases. In retrospect, had we been running the surveillance system at the time, diversion would have been detected earlier than actually recognized. CONCLUSIONS: Identification of the frequent occurrence of atypical drug transactions from automated drug dispensing systems using database queries is a potentially useful method to detect drug diversion in the OR by anesthesia providers. © 2007 by International Anesthesia Research Society.
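The underlying schema and queries are not described in the abstract, so the sketch below only illustrates one screen of the kind reported: flagging dispensing transactions time-stamped well after the end of the case they are charged to. The record layout, field names, and grace period are hypothetical.

```python
from datetime import datetime, timedelta

def flag_post_case_transactions(transactions, cases, grace_minutes=30):
    """Return dispensing transactions recorded well after the end of the case they
    are charged to, one of the atypical patterns described above. Both record
    formats are hypothetical, not the schema of any dispensing or AIMS product."""
    case_end = {c["case_id"]: c["end_time"] for c in cases}
    grace = timedelta(minutes=grace_minutes)
    return [t for t in transactions
            if t["case_id"] in case_end
            and t["time"] > case_end[t["case_id"]] + grace]

cases = [{"case_id": "A1", "end_time": datetime(2024, 3, 1, 10, 0)}]
transactions = [
    {"case_id": "A1", "drug": "fentanyl", "time": datetime(2024, 3, 1, 9, 30)},   # during the case
    {"case_id": "A1", "drug": "fentanyl", "time": datetime(2024, 3, 1, 12, 15)},  # flagged for audit
]
print(flag_post_case_transactions(transactions, cases))
```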
Evaluating the requirements of electroencephalograph instruction for anesthesiology residents
BACKGROUND: During a 1-mo neurosurgical intensive care unit rotation, anesthesiology residents interpret electroencephalograms (EEGs) performed throughout the institution, including intraoperative EEGs. The curriculum goal is to increase familiarity with EEG use and interpretation through 20 EEG interpretations performed with a clinical neurophysiologist during this rotation. We aimed to determine whether the EEG curriculum goals could be achieved with fewer EEG interpretations. METHODS: Each anesthesiology resident who participated interpreted 20 EEGs throughout the rotation. Using a 25-question evaluation tool, anesthesiology residents were assessed before interpreting any EEGs with a clinical neurophysiologist and reassessed after 10, 15, and 20 EEG interpretations. The 25-item evaluation tool was developed to assess the impact of this EEG curriculum on experience with EEG monitoring, recognition of anesthetic effects on EEG tracings, and clinical EEG interpretation. RESULTS: Eight residents completed the study. Mean scores improved from 8.00 ± 2.51 at baseline to 15.12 ± 3.00 (P < 0.001), 15.88 ± 3.18 (P < 0.001), and 18.12 ± 3.23 (P < 0.001) after 10, 15, and 20 EEG interpretations, respectively. DISCUSSION: This innovative, collaborative approach using the expertise of the clinical neurophysiologist met the curriculum goals after 10 supervised EEG interpretations, as measured by the study assessment tool. Copyright © 2009 International Anesthesia Research Society.
Do new anesthesia ventilators deliver small tidal volumes accurately during volume-controlled ventilation?
BACKGROUND: During mechanical ventilation of infants and neonates, small changes in tidal volume may lead to hypo- or hyperventilation, barotrauma, or volutrauma. Partly because breathing circuit compliance and fresh gas flow affect tidal volume delivery by traditional anesthesia ventilators in volume-controlled ventilation (VCV) mode, pressure-controlled ventilation (PCV) using a circle breathing system has become a common approach to minimizing the risk of mechanical ventilation for small patients, although delivered tidal volume is not assured during PCV. A new generation of anesthesia machine ventilators addresses the problems of VCV by adjusting for fresh gas flow and for the compliance of the breathing circuit. In this study, we evaluated the accuracy of new anesthesia ventilators to deliver small tidal volumes. METHODS: Four anesthesia ventilator systems were evaluated to determine the accuracy of volume delivery to the airway during VCV at tidal volume settings of 100, 200, and 500 mL under different conditions of breathing circuit compliance (fully extended and fully contracted circuits) and lung compliance. A mechanical test lung (adult and infant) was used to simulate lung compliances ranging from 0.0025 to 0.03 L/cm H2O. Volumes and pressures were measured using a calibrated screen pneumotachograph and custom software. We tested the Smartvent 7900, Avance, and Aisys anesthesia ventilator systems (GE Healthcare, Madison, WI) and the Apollo anesthesia ventilator (Draeger Medical, Telford, PA). The Smartvent 7900 and Avance ventilators use inspiratory flow sensors to control the volume delivered, whereas the Aisys and Apollo ventilators compensate for the compliance of the circuit. RESULTS: We found that the anesthesia ventilators that use compliance compensation (Aisys and Apollo) accurately delivered both large and small tidal volumes to the airway of the test lung under conditions of normal and low lung compliance during VCV (ranging from 95.5% to 106.2% of the set tidal volume). However, the anesthesia ventilators without compliance compensation were less accurate in delivering the set tidal volume during VCV, particularly at lower volumes and lower lung compliances (ranging from 45.6% to 100.3% of the set tidal volume). CONCLUSIONS: Newer generation anesthesia machine ventilators that compensate for breathing circuit compliance and for fresh gas flow are able to deliver small tidal volumes accurately to the airway under conditions of normal and low lung compliance during volume-controlled ventilation. Accurate VCV may be a useful alternative to PCV, as volume is guaranteed when lung compliance changes, and new strategies such as small volume/lung protective ventilation become possible in the operating room. © 2008 by International Anesthesia Research Society.
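A simplified way to see why circuit compliance matters during VCV: part of each set breath is absorbed by distension of the breathing circuit in proportion to peak inspiratory pressure, and compliance-compensating ventilators add that lost volume back. The sketch below uses illustrative numbers, not measurements from this study, and ignores fresh gas flow effects.

```python
def delivered_tidal_volume(set_vt_ml, circuit_compliance_ml_per_cmh2o,
                           peak_pressure_cmh2o, compensated=False):
    """Approximate tidal volume reaching the airway when part of the set volume is
    lost to circuit distension; a compensating ventilator restores the lost volume."""
    lost_to_circuit = circuit_compliance_ml_per_cmh2o * peak_pressure_cmh2o
    return set_vt_ml if compensated else set_vt_ml - lost_to_circuit

# Illustrative small-patient case: with a 100 mL set volume, a circuit compliance
# of 2.5 mL/cm H2O, and a peak pressure of 20 cm H2O, half the breath is lost
# without compensation.
print(delivered_tidal_volume(100, 2.5, 20, compensated=False))  # 50.0 mL
print(delivered_tidal_volume(100, 2.5, 20, compensated=True))   # 100 mL
```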
Training novice anaesthesiology trainees to speak up for patient safety
Background: Effectively communicating patient safety concerns in the operating theatre is crucial, but novice trainees often struggle to develop effective speaking up behaviour. Our primary objective was to test whether repeated simulation-based practice helps trainees speak up about patient management concerns. We also tested the effect of an additional didactic intervention over standard simulation education. Methods: This prospective observational study with a nested double-blind, randomised controlled component took place during a week-long simulation boot camp. Participants were randomised to receive simulation education (SE), or simulation education plus a didactic session on speaking up behaviour (SE+). Outcome measures were: changes in intrapersonal factors for speaking up (self-efficacy, social outcome expectations, and assertiveness), and speaking up performance during four simulated scenarios. Participants self-reported intrapersonal factors and blinded observers scored speaking up behaviour. Cognitive burden for each simulation was also measured using the National Aeronautics and Space Administration Task Load Index. Mixed-design analysis of variance was used to analyse scores. Results: Twenty-two participants (11 per group) were included. There was no significant interaction between group and time for any outcome measure. There was a main effect for time for self-efficacy (P<0.001); for social outcome expectations (P<0.001); for assertive attitude (P=0.003); and for speaking up scores (P=0.001). The SE+ group's assertive attitude scores increased at follow-up whereas the SE group reverted to near baseline scores (P=0.025). Conclusions: In novice anaesthesia trainees, intrapersonal factors and communication performance benefit from repeated simulation training. Focused teaching may help trainees develop assertive behaviours. © 2019 British Journal of Anaesthesia
Evaluation of a mandatory quality assurance data capture in anesthesia: A secure electronic system to capture quality assurance information linked to an automated anesthesia record
BACKGROUND: Efforts to assure high-quality, safe, clinical care depend upon capturing information about near-miss and adverse outcome events. Inconsistent or unreliable information capture, especially for infrequent events, compromises attempts to analyze events in quantitative terms, understand their implications, and assess corrective efforts. To enhance reporting, we developed a secure, electronic, mandatory system for reporting quality assurance data linked to our electronic anesthesia record. METHODS: We used the capabilities of our anesthesia information management system (AIMS) in conjunction with internally developed, secure, intranet-based, Web application software. The application is implemented with a backend allowing robust data storage, retrieval, data analysis, and reporting capabilities. We customized a feature within the AIMS software to create a hard stop in the documentation workflow before the end of anesthesia care time stamp for every case. The software forces the anesthesia provider to access the separate quality assurance data collection program, which provides a checklist for targeted clinical events and a free text option. After completing the event collection program, the software automatically returns the clinician to the AIMS to finalize the anesthesia record. RESULTS: The number of events captured by the departmental quality assurance office increased by 92% (95% confidence interval [CI] 60.4%-130%) after system implementation. The major contributor to this increase was the new electronic system. This increase has been sustained over the initial 12 full months after implementation. Under our reporting criteria, the overall rate of clinical events reported by any method was 471 events out of 55,382 cases or 0.85% (95% CI 0.78% to 0.93%). The new system collected 67% of these events (95% confidence interval 63%-71%). CONCLUSION: We demonstrate the implementation in an academic anesthesia department of a secure clinical event reporting system linked to an AIMS. The system enforces entry of quality assurance information (either no clinical event or notification of a clinical event). System implementation resulted in capturing nearly twice the number of events at a relatively steady case load. Copyright © 2011 International Anesthesia Research Society.
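The abstract describes a documentation "hard stop" that blocks the end-of-care time stamp until a QA entry exists. A minimal sketch of that control flow is shown below; the function and field names are hypothetical, since the actual AIMS integration is not described in implementable detail.

```python
class QARecordMissing(Exception):
    """Raised when a clinician tries to close the record without a QA entry."""

def finalize_anesthesia_record(record: dict) -> dict:
    """Hypothetical hard stop: refuse to sign 'end of anesthesia care' until a QA
    entry (either 'no event' or a listed clinical event) has been documented."""
    if record.get("qa_entry") is None:
        raise QARecordMissing("Complete the QA event checklist before closing the record.")
    record["end_of_care_signed"] = True
    return record

record = {"case_id": "C42", "qa_entry": None}
try:
    finalize_anesthesia_record(record)
except QARecordMissing as err:
    print(err)                                          # clinician is redirected to the QA checklist
record["qa_entry"] = {"event": "none"}
print(finalize_anesthesia_record(record)["end_of_care_signed"])  # True
```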
The reliability and accuracy of a noncontact electrocardiograph system for screening purposes
BACKGROUND: Electrocardiography (ECG) requires the application of electrodes to the skin and often necessitates undressing. Capacitively coupled electrodes embedded in a normal chair would be a rational alternative for ECG screening. We evaluated the reliability and accuracy of ECG electrodes embedded in a chair cushion. METHODS: Two independent clinicians compared ECG recordings obtained using skin electrodes with recordings obtained using capacitively coupled electrodes that were embedded in a chair cushion in an anesthesiology premedication room, a cardiology outpatient ward, and a cardiology day ward. We analyzed the data to compare the sensitivity and specificity for the diagnosis of cardiac arrhythmias. RESULTS: ECG recordings were obtained from 107 patients. Heart rate was accurately measured using the capacitively coupled electrodes, but motion artifacts made the identification of P and T waves unreliable. Signal quality was poor for patients with low body weight, patients wearing clothing containing mixed fibers, and patients wearing sweaty shirts. CONCLUSIONS: Heart rate was accurately measured, and some cardiac arrhythmias were correctly diagnosed using capacitive ECG electrodes. Capacitive electrodes embedded into an examination chair are a promising tool for preoperative screening. Improved artifact reduction algorithms are needed before capacitive electrodes will replace skin electrodes. Copyright © 2012 International Anesthesia Research Society.
Practical training of anesthesia clinicians in electroencephalogram-based determination of hypnotic depth of general anesthesia
BACKGROUND: Electroencephalographic (EEG) brain monitoring during general anesthesia provides information on hypnotic depth. We hypothesized that anesthesia clinicians could be trained rapidly to recognize typical EEG waveforms occurring with volatile-based general anesthesia. METHODS: This was a substudy of a trial testing the hypothesis that EEG-guided anesthesia prevents postoperative delirium. The intervention was a 35-minute training session, summarizing typical EEG changes with volatile-based anesthesia. Participants completed a preeducational test, underwent training, and completed a posteducational test. For each question, participants indicated whether the EEG was consistent with (1) wakefulness, (2) non–slow-wave anesthesia, (3) slow-wave anesthesia, or (4) burst suppression. They also indicated whether the processed EEG (pEEG) index was discordant with the EEG waveforms. Four clinicians, experienced in intraoperative EEG interpretation, independently evaluated the EEG waveforms, resolved disagreements, and provided reference answers. Ten questions were assessed in the preeducational test and 9 in the posteducational test. RESULTS: There were 71 participants; 13 had previous anesthetic-associated EEG interpretation training. After training, the 58 participants without prior training improved at identifying dominant EEG waveforms (median 60% with interquartile range [IQR], 50%–70% vs 78% with IQR, 67%–89%; difference: 18%; 95% confidence interval [CI], 8–27; P < .001). In contrast, there was no significant improvement following the training for the 13 participants who reported previous training (median 70% with IQR, 60%–80% vs 67% with IQR, 67%–78%; difference: −3%; 95% CI, −18 to 11; P = .88). The difference in the change between the pre- and posteducational session for the previously untrained versus previously trained was statistically significant (difference in medians: 21%; 95% CI, 2–28; P = .005). Clinicians without prior training also improved in identifying discordance between the pEEG index and the EEG waveform (median 60% with IQR, 40%–60% vs median 100% with IQR, 75%–100%; difference: 40%; 95% CI, 30–50; P < .001). Clinicians with prior training showed no significant improvement (median 60% with IQR, 60%–80% vs 75% with IQR, 75%–100%; difference: 15%; 95% CI, −16 to 46; P = .16). Regarding the identification of discordance, the difference in the change between the pre- and posteducational session for the previously untrained versus previously trained was statistically significant (difference in medians: 25%; 95% CI, 5–45; P = .012). CONCLUSIONS: A brief training session was associated with improvements in clinicians without prior EEG training in (1) identifying EEG waveforms corresponding to different hypnotic depths and (2) recognizing when the hypnotic depth suggested by the EEG was discordant with the pEEG index. Copyright © 2019 International Anesthesia Research Society
The success of emergency endotracheal intubation in trauma patients: A 10-year experience at a major adult trauma referral center
BACKGROUND: Emergency airway management is a required skill for many anesthesiologists. We studied 10 yr of experience at a Level 1 trauma center to determine the outcomes of tracheal intubation attempts within the first 24 h of admission. METHODS: We examined Trauma Registry, quality management, and billing system records from July 1996 to June 2006 to determine the number of patients requiring intubation within 1 h of hospital arrival and to estimate the number requiring intubation within the first 24 h. We reviewed the medical record of each patient in either cohort who underwent a surgical airway access procedure (tracheotomy or cricothyrotomy) to determine the presenting characteristics of the patients and the reason they could not be orally or nasally intubated. RESULTS: All intubation attempts were supervised by an anesthesiologist experienced in trauma patient care. Rapid sequence intubation with direct laryngoscopy was the standard approach throughout the study period. During the first hour after admission, 6088 patients required intubation, of whom 21 (0.3%) received a surgical airway. During the first 24 h, 10 more patients, for a total of 31, received a surgical airway, during approximately 32,000 attempts (0.1%). Unanticipated difficult upper airway anatomy was the leading reason for a surgical airway. Four of the 31 patients died of their injuries, but none as the result of failed intubation. CONCLUSIONS: In the hands of experienced anesthesiologists, rapid sequence intubation followed by direct laryngoscopy is a remarkably effective approach to emergency airway management. An algorithm designed around this approach can achieve very high levels of success. Copyright © 2009 International Anesthesia Research Society.
Trends in perioperative practice and resource utilization in patients with obstructive sleep apnea undergoing joint arthroplasty
BACKGROUND: Emerging evidence associating obstructive sleep apnea (OSA) with adverse perioperative outcomes has recently heightened the level of awareness among perioperative physicians. In particular, estimates projecting the high prevalence of this condition in the surgical population highlight the necessity of the development and adherence to "best clinical practices." In this context, a number of expert panels have generated recommendations in an effort to provide guidance for perioperative decision-making. However, given the paucity of insights into the status of the implementation of recommended practices on a national level, we sought to investigate current utilization, trends, and the penetration of OSA care-related interventions in the perioperative management of patients undergoing lower joint arthroplasties. METHODS: In this population-based analysis, we identified 1,107,438 (Premier Perspective database; 2006-2013) cases of total hip and knee arthroplasties and investigated utilization and temporal trends in the perioperative use of regional anesthetic techniques, blood oxygen saturation monitoring (oximetry), supplemental oxygen administration, positive airway pressure therapy, advanced monitoring environments, and opioid prescription among patients with and without OSA. RESULTS: The utilization of regional anesthetic techniques did not differ by OSA status and overall <25% and 15% received neuraxial anesthesia and peripheral nerve blocks, respectively. Trend analysis showed a significant increase in peripheral nerve block use by >50% and a concurrent decrease in opioid prescription. Interestingly, while the absolute number of patients with OSA receiving perioperative oximetry, supplemental oxygen, and positive airway pressure therapy significantly increased over time, the proportional use significantly decreased by approximately 28%, 36%, and 14%, respectively. A shift from utilization of intensive care to telemetry and stepdown units was seen. CONCLUSIONS: On a population-based level, the implementation of OSA-targeted interventions seems to be limited with some of the current trends virtually in contrast to practice guidelines. Reasons for these findings need to be further elucidated, but observations of a dramatic increase in absolute utilization with a proportional decrease may suggest possible resource constraints as a contributor. © 2017 International Anesthesia Research Society.
The Dynamics of Enterococcus Transmission from Bacterial Reservoirs Commonly Encountered by Anesthesia Providers
BACKGROUND: Enterococci, the second leading cause of health care-associated infections, have evolved from commensal and harmless organisms to multidrug-resistant bacteria associated with a significant increase in patient morbidity and mortality. Prevention of ongoing spread of this organism within and between hospitals is important. In this study, we characterized Enterococcus transmission dynamics for bacterial reservoirs commonly encountered by anesthesia providers during the routine administration of general anesthesia. METHODS: Enterococcus isolates previously obtained from bacterial reservoirs frequently encountered by anesthesiologists (patient nasopharynx and axilla, anesthesia provider hands, and the adjustable pressure-limiting valve and agent dial of the anesthesia machine) at 3 major academic medical centers were identified as possible intraoperative bacterial transmission events by class of pathogen, temporal association, and phenotypic analysis (analytical profile indexing). They were then subjected to antibiotic disk diffusion sensitivity for transmission event confirmation. Isolates involved in confirmed transmission events were further analyzed to characterize the frequency, mode, origin, location of transmission events, and antibiotic susceptibility of transmitted pathogens. RESULTS: Three hundred eighty-nine anesthesia reservoir isolates were previously identified by gross morphology and simple rapid tests as Enterococcus. The combination of further analytical profile indexing analysis and temporal association implicated 43% (166/389) of those isolates in possible intraoperative bacterial transmission events. Approximately 30% (49/166) of possible transmission events were confirmed by additional antibiotic disk diffusion analysis. Two phenotypes, E5 and E7, explained 80% (39/49) of confirmed transmission events. For both phenotypes, provider hands were a common reservoir of origin proximal to the transmission event (96% [72/75] hand origin for E7 and 89% [50/56] hand origin for E5) and site of transmission (94% [16/17] hand transmission location for E7 and 86% [19/22] hand transmission location for E5). CONCLUSIONS: Anesthesia provider hand contamination is a common proximal source and transmission location for Enterococcus transmission events in the anesthesia work area. Future work should evaluate the impact of intraoperative hand hygiene improvement strategies on the dynamics of intraoperative Enterococcus transmission. © 2015 International Anesthesia Research Society.
Agreement between trainees and supervisors on first-year entrustable professional activities for anaesthesia training
Background: Entrustable professional activities (EPAs) are commonly developed by senior clinicians and education experts. However, if postgraduate training is conceptualised as an educational alliance, the perspective of trainees should be included. This raises the question as to whether the views of trainees and supervisors on entrustability of specific EPAs differ, which we aimed to explore. Methods: A working group, including all stakeholders, selected and drafted 16 EPAs with the potential for unsupervised practice within the first year of training. For each EPA, first-year trainees, advanced trainees, and supervisors decided whether it should be possible to attain trust for unsupervised practice by the end of the first year of anaesthesiology training (i.e. whether the respective EPA qualified as a ‘first-year EPA’). Results: We surveyed 23 first-year trainees, 47 advanced trainees, and 51 supervisors (overall response rate: 68%). All groups fully agreed upon seven EPAs as ‘first-year EPAs’ and on four EPAs that should not be entrusted within the first year. For all five remaining EPAs, a significantly higher proportion of first-year trainees thought these should be entrusted as first-year EPAs compared with advanced trainees and supervisors. We found no differences between advanced trainees and supervisors. Conclusions: The views of first-year trainees, advanced trainees, and supervisors showed high agreement. Differing views of young trainees disappeared after the first year. This finding provides a fruitful basis to involve trainees in negotiations of autonomy. © 2020 British Journal of Anaesthesia
A randomized trial comparing the effect of fiberoptic selection and guidance versus random selection, blind insertion, and direct laryngoscopy, on the incidence and severity of epistaxis after nasotracheal intubation
BACKGROUND: Epistaxis, or nasal bleeding, is a common complication after nasotracheal intubation (NTI). Because such bleeding is likely related to trauma during intubation, use of fiberoptic visualization and guidance rather than direct laryngoscopy may affect the incidence and severity of epistaxis. We compared the incidence of epistaxis after NTI using a fiberoptic versus a direct laryngoscopy approach. METHODS: Seventy patients who were able to breathe easily through unobstructed nostrils and required NTI as part of their anesthetic management were recruited. Exclusion criteria included unequal nasal airflow, nostril obstruction, previous nasal trauma or surgery, and coagulation abnormalities as determined by history. Patients were randomly assigned to undergo NTI with thermosoftened Mallinckrodt nasal Ring-Adair-Elwyn (RAE) tubes via either traditional direct laryngoscopy using a Macintosh blade or fiberoptic nasal intubation. All patients first underwent anesthetic induction and were randomized to blind or fiberoptic groups. Patients in the blind insertion/direct laryngoscopy group were then intubated via a randomly selected nostril. Patients in the fiberoptic group underwent an asleep nasal fiberoptic examination to determine the most patent nostril, followed by tube insertion under fiberoptic guidance. Ten minutes after NTI, the incidence and severity of epistaxis were evaluated and graded by the surgeon, who was blinded to the intubation method. RESULTS: Initial nasal fiberoptic endoscopy identified asymptomatic nasal pathology in 51% of patients: inferior turbinate hypertrophy (28.6%) and deviation of the nasal septum (22.8%). The incidence of epistaxis was higher in the blind insertion/direct laryngoscopy group (88%) than in the fiberoptic group (51%; relative risk, 0.55; 95% confidence interval, 0.38-0.79; P =.0011). The severity of bleeding was also greater in the blind tube insertion/direct laryngoscopy cohort (Wilcoxon Mann-Whitney odds, 3.5; 95% confidence interval, 1.8-11.1). CONCLUSIONS: Fiberoptic nostril selection and guidance during NTI reduced the incidence and severity of epistaxis when compared with NTI performed via blind insertion and direct laryngoscopy. Copyright © 2018 International Anesthesia Research Society.
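The relative risk reported above is the epistaxis rate in the fiberoptic group divided by the rate in the blind insertion/direct laryngoscopy group. The sketch below shows that calculation with a standard log-scale confidence interval; the counts are illustrative round numbers consistent with the reported percentages, not the exact study data.

```python
import math

def relative_risk(events_a, n_a, events_b, n_b, z=1.96):
    """Relative risk of group A versus group B with a 95% CI computed on the log scale."""
    risk_a, risk_b = events_a / n_a, events_b / n_b
    rr = risk_a / risk_b
    se_log_rr = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo, hi = (math.exp(math.log(rr) + s * z * se_log_rr) for s in (-1, 1))
    return rr, lo, hi

# Illustrative counts consistent with 51% vs 88% epistaxis in two groups of 35;
# the result lands close to the reported estimate of 0.55 (0.38-0.79).
print(tuple(round(x, 2) for x in relative_risk(18, 35, 31, 35)))
```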
Adherence to the European Society of Cardiology/European Society of Anaesthesiology recommendations on preoperative cardiac testing and association with positive results and cardiac events: a cohort study
Background: European Society of Cardiology/European Society of Anaesthesiology (ESC/ESA) guidelines inform cardiac workup before noncardiac surgery based on an algorithm. Our primary hypotheses were that there would be associations (i) between the risk groups stratified according to the algorithms and major adverse cardiac events (MACE), and (ii) between over- or underuse of cardiac testing and MACE. Methods: This is a secondary analysis of a multicentre prospective cohort. Major adverse cardiac events were a composite of cardiac death, myocardial infarction, acute heart failure, and life-threatening arrhythmia at 30 days. For each cardiac test, pathological findings were defined a priori. We used multivariable logistic regression to measure associations. Results: We registered 359 MACE at 30 days amongst 6976 patients; classification in a higher-risk group using the ESC/ESA algorithm was associated with 30-day MACE; however, discrimination of the ESC/ESA algorithms for 30-day MACE was modest; area under the curve 0.64 (95% confidence interval: 0.61–0.67). After adjustment for sex, age, and ASA physical status, discrimination was 0.72 (0.70–0.75). Overuse or underuse of cardiac tests was not consistently associated with MACE. There was no independent association between test recommendation class and pathological findings (P=0.14 for stress imaging; P=0.35 for transthoracic echocardiography; P=0.52 for coronary angiography). Conclusions: Discrimination for MACE using the ESC/ESA guidelines algorithms was limited. Overuse or underuse of cardiac tests was not consistently associated with cardiovascular events. The recommendation class of preoperative cardiac tests did not influence their yield. Clinical trial registration: NCT02573532. © 2021 British Journal of Anaesthesia
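Discrimination here is the area under the receiver operating characteristic curve for separating patients with and without 30-day MACE by their algorithm-assigned risk class. A self-contained sketch of the rank-based (Mann-Whitney) form of that statistic, on invented data:

```python
def auc(scores_events, scores_no_events):
    """Probability that a randomly chosen patient with MACE carries a higher risk
    score than one without (ties count one half), i.e., the rank-based AUC."""
    pairs = len(scores_events) * len(scores_no_events)
    wins = sum((e > n) + 0.5 * (e == n)
               for e in scores_events for n in scores_no_events)
    return wins / pairs

# Invented ordinal risk classes (higher = higher algorithm-assigned risk group).
with_mace = [3, 2, 3, 1, 2]
without_mace = [1, 2, 1, 1, 2, 3, 1, 2]
print(round(auc(with_mace, without_mace), 2))
```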
A comparison of plaintiff and defense expert witness qualifications in malpractice litigation in anesthesiology
BACKGROUND: Expert witnesses serve a crucial role in the medicolegal system, interpreting evidence so that it can be understood by jurors. Guidelines have been established by both the legal community and professional medical societies detailing the expectations of expert witnesses. The primary objective of this analysis was to evaluate the expertise of anesthesiologists testifying as expert witnesses in malpractice litigation. METHODS: The WestlawNext legal database was searched for cases over the last 5 years in which anesthesiologists served as expert witnesses. Internet searches were used to identify how long each witness had been in practice. Departmental websites, the Scopus database, and state medical licensing boards were used to measure scholarly impact (via the h-index) and determine whether the witness was a full-time faculty member in academia. RESULTS: Anesthesiologists testifying in 295 cases since 2008 averaged over 30 years of experience per person (mean ± SEM: defense, 33.4 ± 0.7; plaintiff, 33.1 ± 0.6; P = 0.76). Individual scholarly impact, as measured by h-index, was found to be lower among plaintiff experts (mean ± SEM, 4.8 ± 0.5) than their defense counterparts (mean ± SEM, 8.1 ± 0.8; P = 0.02). A greater proportion of defense witnesses were involved in academic practice (65.7% vs 54.8%, P = 0.04). CONCLUSIONS: Anesthesiologists testifying for both sides are very experienced. Defense expert witnesses are more likely to have a higher scholarly impact and to practice in an academic setting. This indicates that defense expert witnesses may have greater expertise than plaintiff expert witnesses. © 2015 International Anesthesia Research Society.
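For readers unfamiliar with the h-index used above, the following minimal sketch (not part of the study) shows the standard calculation from a list of per-publication citation counts; the citation counts are hypothetical.

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least h
    publications have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one expert witness's publications
print(h_index([25, 10, 8, 5, 4, 3, 1, 0]))  # -> 4
```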
Anesthesia residents' global (Departmental) evaluation of faculty anesthesiologists' supervision can be less than their average evaluations of individual anesthesiologists
BACKGROUND: Faculty anesthesiologists' supervision of anesthesiology residents is required for both postgraduate medical education and billing compliance. Previously, using the de Oliveira Filho et al. supervision question set, De Oliveira et al. found that residents who reported mean department-wide supervision scores <3.0 ("frequent") reported a significantly more frequent occurrence of mistakes with negative consequences to patients, as well as medication errors. In our department, residents provide daily evaluations of the supervision received by individual faculty. Using a survey study, we compared relationships between residents' daily supervision scores for individual faculty anesthesiologists and residents' supervision scores for the entire department (comprising these faculty). METHODS: We studied all anesthesiology residents in clinical years 1, 2, and 3 (i.e., neither in the "base year" nor in fellowship). There were daily evaluations of individual faculty supervision of operative anesthesia for 36 weeks. Residents clicked a hyperlink in the invitation e-mail taking them to a secure Web page to provide their global (departmental) assessment of faculty supervision. We calculated the ratio of each resident's global (departmental) faculty supervision score (i.e., mean among 9 questions × 1 evaluation) to the same resident's daily evaluations of individual faculty (i.e., mean among 9 questions × many evaluations). RESULTS: All 39 of 39 residents chose to participate. The mean departmental supervision score was significantly less (P < 0.0001) than the mean of individual faculty scores. The median ratio of scores was 86% (95% confidence interval, 83%-89%). Kendall's rank correlation between global and (mean) individual faculty scores was τb = 0.34 ± 0.11 (P = 0.0032). The ratios were uniformly distributed (P = 0.64) between the observed minimums and maximums; were not correlated with the mean value of individual faculty scores previously provided by each resident (P = 0.64); were not correlated with the number of individual faculty evaluations previously provided by each resident (P = 0.49); and did not differ among the first, second, or third year residents (P = 0.37). CONCLUSIONS: Residents' perceptions of overall (departmental) faculty supervision were less than overall averages of their perceptions of individual faculty supervision. This should be considered when interpreting national survey results (e.g., of patient safety), residency program evaluations, and individual faculty anesthesiologist performance. © 2014 International Anesthesia Research Society.
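A minimal sketch, with hypothetical scores rather than study data, of the ratio described in the methods: each resident's single global (departmental) score divided by the mean of that resident's daily individual-faculty scores, summarized by the median ratio.

```python
import statistics

# Hypothetical supervision scores: each resident has one global (departmental)
# score and several daily scores for individual faculty (each already the
# mean of the 9 questions).
residents = {
    "R01": {"global": 3.1, "individual": [3.6, 3.8, 3.5, 3.9]},
    "R02": {"global": 3.4, "individual": [3.7, 3.9, 4.0]},
    "R03": {"global": 2.9, "individual": [3.4, 3.5, 3.3, 3.6, 3.5]},
}

ratios = []
for resident_id, scores in residents.items():
    mean_individual = statistics.mean(scores["individual"])
    ratios.append(scores["global"] / mean_individual)

print(f"median ratio = {statistics.median(ratios):.2f}")
```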
Risk factors associated with fast-track ineligibility after monitored anesthesia care in ambulatory surgery patients
BACKGROUND: Fast-tracking after ambulatory anesthesia has been advocated as a pathway to improve efficiency and maximize resources without compromising patient safety and satisfaction. Studies reporting successful fast-tracking focus primarily on anesthesia techniques and not on specific patient factors, surgical procedure, or process variables associated with unsuccessful fast-tracking. We performed this retrospective study to implement a process for improving fast-tracking, measure change over time, and identify variables associated with patients unable to fast-track successfully after monitored anesthesia care. METHODS: A fast-track protocol for all patients receiving monitored anesthesia care based on the Modified Aldrete Score was instituted. It consisted of written policy changes and weekly review at physician and nursing department meetings for the first month, followed by monthly feedback during a 6-mo intervention period. Data collected for a 3-mo baseline and the consecutive 6-mo intervention period included fast-track status, surgical service and procedure, surgeon and anesthesiology provider, age, gender, ASA status, total time in operating room, and total postoperative time (end of surgery to actual discharge). RESULTS: Three hundred and thirty-two cases were completed during the 3-mo baseline period, and 641 cases were completed during the 6-mo intervention period. Fast-track success rate improved from 23% to 56%, P < 0.001. Independent risk factors for fast-track ineligibility identified by multivariate regression analysis included age <60 yr, ASA III versus I, general surgery versus orthopedics and ophthalmology, month after implementation, and total postoperative time. Total postoperative time was significantly shorter by 64 min in the fast-track group, P < 0.001. CONCLUSION: Fast-track success rate can be improved and sustained over time by education and personnel feedback. We identified risk factors that were significantly associated with fast-track ineligibility. If those factors are found to be associated with fast-track ineligibility in a prospective investigation, they should enable development of multidisciplinary patient and procedure-specific guidelines for fast-tracking. © 2008 by International Anesthesia Research Society.
A bibliometric analysis of global clinical research by anesthesia departments
BACKGROUND: Few studies have investigated the diversity in research conducted by anesthesia-based researchers. We examined global clinical research attributed to anesthesia departments using Medline® and Ovid® databases. We also investigated the impact of economic development on national academic productivity. METHODS: We conducted a Medline search for English-language publications from 2000 to 2005. The search included only clinical research in which institutional affiliation included words relating to anesthesia (e.g., anesthesiology, anesthesia, etc.). Population and gross national income data were obtained from publicly available databases. Impact factors for journals were obtained from Journal Citation Reports (Thomson Scientific). RESULTS: There were 6736 publications from 64 countries in 551 journals. About 85% of all publications were represented by 46 journals. Randomized controlled trials constituted 4685 (70%) of publications. Turkey had the highest percentage of randomized controlled trials (88%). The United States led the field in quantity (20% of total) and mean impact factor (3.0) of publications. Finland had the highest productivity when adjusted for population (36 publications per million population). Publications from the United States declined from 23% in 2000 to 17% in 2005. CONCLUSIONS: Clinical research attributable to investigators in our specialty is diverse, and extends beyond the traditional field of anesthesia and intensive care. The United States produces the most clinical research, but per capita output is higher in European nations. © 2007 by International Anesthesia Research Society.
Use of survey and Delphi process to understand trauma anesthesia care practices
BACKGROUND: Few trauma guidelines evaluate and recommend anesthesiology practices and there are no trauma anesthesia-specific guidelines. There is no information on how anesthesiologists perceive clinical practice patterns. Our objective was to understand the perceptions of anesthesiologists regarding trauma anesthesia practices. METHODS: A survey assessing anesthesia management of trauma patients was distributed to 21,491 anesthesiologists. A subset of 10 of these questions was subsequently reviewed by a trauma anesthesiology focus group through a 3-round web-based Delphi process. A question was deemed to have respondent consensus if the response with the highest percentage of agreement was unchanged between rounds 1 and 2. RESULTS: A total of 2360 anesthesiologists (11% response rate) responded to the survey. Results demonstrated that practitioners' answers conflicted with existing surgical trauma society recommendations (ie, when to transfuse component therapy) and that several areas lacking any guidelines produced response variability among anesthesiologists, with no single answer achieving >75% agreement (ie, intubation technique of choice for patients with an uncleared cervical spine). Thirteen trauma anesthesiologists participated in round 1 (response rate 100%), and 12 responded in rounds 2 and 3 (response rate 92%) of the Delphi process. None of the questions received 100% agreement. Consensus was achieved on 9 of 10 statements pertaining to trauma anesthesia care. Consensus was not reached on the intubating technique in a hemodynamically unstable patient with an uncleared cervical spine with deficits. Delphi participant opinion conflicted with existing guidelines on 2 statements: the use of cricoid pressure, and when to begin blood component therapy. CONCLUSIONS: There are several important areas of trauma anesthesia practice where guidelines do not exist and several where existing guidelines are not endorsed by the majority of practitioners who completed our survey. The lack of consensus on trauma anesthesia management and the variation in survey responses demonstrate a need to develop evidence-based trauma anesthesia guidelines. Copyright © 2018 International Anesthesia Research Society.
Determination of Geolocations for Anesthesia Specialty Coverage and Standby Call Allowing Return to the Hospital Within a Specified Amount of Time
BACKGROUND: For emergent procedures, in-house teams are required for immediate patient care. However, for many procedures, there is time to bring in a call team from home without increasing patient morbidity. Anesthesia providers taking subspecialty or backup call from home are required to return to the hospital within a designated number of minutes. Driving times to the hospital during the hours of call need to be considered when deciding where to live or to visit during such calls. Distance alone is an insufficient criterion because of variable traffic congestion and differences in highway access. We desired to develop a simple, inexpensive method to determine postal codes surrounding hospitals allowing a timely return during the hours of standby call. METHODS: Pessimistic travel times and driving distances were calculated using the Google distance matrix application programming interface for all N = 136 postal codes within 60 great circle ("straight line") miles of the University of Miami Hospital (Miami, FL) during all 108 weekly standby call hours. A postal code was acceptable if the estimated longest driving time to return to the hospital was ≤60 minutes (the anesthesia department's service commitment to start an urgent case during standby call). Linear regression (with intercept = 0) minimizing the mean absolute percentage difference between the distances (great circle and driving) and the pessimistic driving times to return to the hospital was performed among all 136 postal codes. Implementation software written in Python is provided. RESULTS: Postal codes allowing return to the studied hospital within the specified interval were identified. The linear regression showed that driving distances correlated poorly with the longest driving time to return to the hospital among the 108 weekly call hours (mean absolute percentage error = 25.1% ± 1.7% standard error [SE]; N = 136 postal codes). Great circle distances also correlated poorly (mean absolute percentage error = 28.3% ± 1.9% SE; N = 136). Generalizability of the method was determined by successful application to a different hospital in a rural state (University of Iowa Hospital). CONCLUSIONS: The described method allows identification of postal codes surrounding a hospital in which personnel taking standby call could be located and be able to return to the hospital during call hours on every day of the week within any specified amount of time. For areas at the perimeter of acceptability, online distance mapping applications can be used to check driving times during the hours of standby call. © 2019 International Anesthesia Research Society.
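The study notes that its implementation software is written in Python; the snippet below is not that software, only a minimal sketch of the kind of query involved, using the Google Distance Matrix API with a pessimistic traffic model. The API key, hospital address string, postal code, and departure time are placeholders.

```python
import time
import requests

API_KEY = "YOUR_GOOGLE_API_KEY"  # placeholder, not a real key
HOSPITAL = "University of Miami Hospital, Miami, FL"

def pessimistic_drive_seconds(origin_postal_code, departure_unix):
    """Query the Google Distance Matrix API for a worst-case
    (traffic_model=pessimistic) driving time to the hospital."""
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/distancematrix/json",
        params={
            "origins": origin_postal_code,
            "destinations": HOSPITAL,
            "mode": "driving",
            "departure_time": departure_unix,   # must be now or in the future
            "traffic_model": "pessimistic",
            "key": API_KEY,
        },
        timeout=30,
    )
    # duration_in_traffic is returned only when a departure_time is supplied
    element = resp.json()["rows"][0]["elements"][0]
    return element["duration_in_traffic"]["value"]  # seconds

# Illustrative check: could a provider in ZIP 33160 return within 60 minutes
# at some future call hour?
departure = int(time.time()) + 3 * 24 * 3600
print(pessimistic_drive_seconds("33160, FL", departure) <= 60 * 60)
```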
Modeling procedure and surgical times for current procedural terminology-anesthesia-surgeon combinations and evaluation in terms of case-duration prediction and operating room efficiency: A multicenter study
BACKGROUND: Gains in operating room (OR) scheduling may be obtained by using accurate statistical models to predict surgical and procedure times. The 3 main contributions of this article are the following: (i) the validation of Strum's results on the statistical distribution of case durations, including surgeon effects, using OR databases of 2 European hospitals, (ii) the use of expert prior expectations to predict durations of rarely observed cases, and (iii) the application of the proposed methods to predict case durations, with an analysis of the resulting increase in OR efficiency. METHODS: We retrospectively reviewed all recorded surgical cases of 2 large European teaching hospitals from 2005 to 2008, involving 85,312 cases and 92,099 h in total. Surgical times tended to be skewed and bounded by some minimally required time. We compared the fit of the normal distribution with that of 2- and 3-parameter lognormal distributions for case durations of a range of Current Procedural Terminology (CPT)-anesthesia combinations, including possible surgeon effects. For cases with very few observations, we investigated whether supplementing the data information with surgeons' prior guesses helps to obtain better duration estimates. Finally, we used best fitting duration distributions to simulate the potential efficiency gains in OR scheduling. RESULTS: The 3-parameter lognormal distribution provides the best results for the case durations of CPT-anesthesia (surgeon) combinations, with an acceptable fit for almost 90% of the CPTs when segmented by the factor surgeon. The fit is best for surgical times and somewhat less for total procedure times. Surgeons' prior guesses are helpful for OR management to improve duration estimates of CPTs with very few (<10) observations. Compared with the standard way of case scheduling, using the mean of the 3-parameter lognormal distribution reduces the mean overreserved OR time per case by up to 11.9 (11.8-12.0) min (55.6%) and the mean underreserved OR time per case by up to 16.7 (16.5-16.8) min (53.1%). When scheduling cases using the 3-parameter lognormal model, the mean overutilized OR time is up to 20.0 (19.7-20.3) min per OR per day lower than for the standard method and 11.6 (11.3-12.0) min per OR per day lower than for the bias-corrected mean. CONCLUSIONS: OR case scheduling can be improved by using the 3-parameter lognormal model with surgeon effects and by using surgeons' prior guesses for rarely observed CPTs. Using the 3-parameter lognormal model for case-duration prediction and scheduling significantly reduces both the prediction error and OR inefficiency. © 2009 by International Anesthesia Research Society.
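As an illustration of the distributional modeling described (not the authors' code), 2- and 3-parameter lognormal fits can be obtained with SciPy, where the location parameter of the 3-parameter fit plays the role of a minimally required time. The case durations below are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical surgical times (minutes) for one CPT-anesthesia-surgeon combination
durations = np.array([62, 71, 75, 80, 84, 90, 95, 101, 110, 128], dtype=float)

# 2-parameter lognormal: shift (loc) fixed at zero
s2, loc2, scale2 = stats.lognorm.fit(durations, floc=0)

# 3-parameter lognormal: shift estimated, interpretable as a minimally required time
s3, loc3, scale3 = stats.lognorm.fit(durations)

print(f"2-parameter: shape={s2:.3f}, scale={scale2:.1f}")
print(f"3-parameter: shape={s3:.3f}, shift={loc3:.1f} min, scale={scale3:.1f}")

# A scheduling estimate could then use, for example, the fitted mean duration
print("fitted mean (3-parameter):",
      stats.lognorm.mean(s3, loc=loc3, scale=scale3))
```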
Status of Women in Academic Anesthesiology: A 10-Year Update
BACKGROUND: Gender inequity is still prevalent in today’s medical workforce. Previous studies have investigated the status of women in academic anesthesiology. The objective of this study is to provide a current update on the status of women in academic anesthesiology. We hypothesized that while the number of women in academic anesthesiology has increased in the past 10 years, major gender disparities continue to persist, most notably in leadership roles. METHODS: Medical student, resident, and faculty data were obtained from the Association of American Medical Colleges. The number of women in anesthesiology at the resident and faculty level, the distribution of faculty academic rank, and the number of women chairpersons were compared across the period from 2006 to 2016. The gender distribution of major anesthesiology journal editorial boards and data on anesthesiology research grant awards, among other leadership roles, were collected from websites and compared to data from 2005 and 2006. RESULTS: The number (%) of women anesthesiology residents/faculty has increased from 1570 (32%)/1783 (29%) in 2006 to 2145 (35%)/2945 (36%) in 2016 (P = .004 and P < .001, respectively). Since 2006, the odds that an anesthesiology faculty member was a woman increased approximately 2% per year, with an estimated odds ratio of 1.02 (95% confidence interval, 1.014–1.025; P < .001). In 2015, the percentage of women anesthesiology full professors (7.4%) was less than that of men full professors (17.3%) (difference, −9.9%; 95% confidence interval of the difference, −11.3% to −8.5%; P < .001). The percentage of women anesthesiology department chairs remained unchanged from 2006 to 2016 (12.7% vs 14.0%) (P = .75). To date, neither Anesthesia & Analgesia nor Anesthesiology has had a woman Editor-in-Chief. The percentage of major research grant awards to women has increased significantly from 21.1% in 1997–2007 to 31.5% in 2007–2016 (P = .02). CONCLUSIONS: Gender disparities continue to exist at the upper levels of leadership in academic anesthesiology, most importantly in the roles of full professor, department chair, and journal editors. However, there are some indications that women may be on the path to leadership parity, most notably, the growth of women in anesthesiology residencies and faculty positions and increases in major research grants awarded to women. Copyright © 2018 International Anesthesia Research Society
Preclinical Proficiency-Based Model of Ultrasound Training
BACKGROUND: Graduate medical education is being transformed from a time-based training model to a competency-based training model. While the application of ultrasound in the perioperative arena has become an expected skill set for anesthesiologists, clinical exposure during training is intermittent and nongraduated without a structured program. We developed a formal structured perioperative ultrasound program to efficiently train first-year clinical anesthesia (CA-1) residents and evaluated its effectiveness quantitatively in the form of a proficiency index. METHODS: In this prospective study, a multimodal perioperative ultrasound training program spread over 3 months was designed by experts at an accredited anesthesiology residency program to train the CA-1 residents. The training model was based on self-learning through web-based modules and instructor-based learning by performing perioperative ultrasound techniques on simulators and live models. The effectiveness of the program was evaluated by comparing the CA-1 residents who completed the training to graduating third-year clinical anesthesia (CA-3) residents who underwent the traditional ultrasound training in the residency program using a designed index called a "proficiency index." The proficiency index was composed of scores on a cognitive knowledge test (20%) and scores on an objective structured clinical examination (OSCE) to evaluate the workflow understanding (40%) and psychomotor skills (40%). RESULTS: Sixteen CA-1 residents successfully completed the perioperative ultrasound training program and the subsequent evaluation with the proficiency index. The total duration of training was 60 hours of self-based learning and instructor-based learning. There was a significant improvement observed in the cognitive knowledge test scores for the CA-1 residents after the training program (pretest: 71% [0.141 ± 0.019]; posttest: 83% [0.165 ± 0.041]; P <.001). At the end of the program, the CA-1 residents achieved an average proficiency index that was not significantly different from the average proficiency index of graduating CA-3 residents who underwent traditional ultrasound training (CA-1: 0.803 ± 0.049; CA-3: 0.823 ± 0.063, P =.307). CONCLUSIONS: Our results suggest that the implementation of a formal, structured curriculum allows CA-1 residents to achieve a level of proficiency in perioperative ultrasound applications before clinical exposure. © 2022 Lippincott Williams and Wilkins. All rights reserved.
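A minimal sketch of the weighted proficiency index as described (20% cognitive knowledge test, 40% OSCE workflow understanding, 40% OSCE psychomotor skills), using hypothetical component scores expressed as fractions.

```python
def proficiency_index(knowledge_score, workflow_osce_score, psychomotor_osce_score):
    """Weighted proficiency index as described in the abstract:
    20% cognitive knowledge, 40% OSCE workflow understanding,
    40% OSCE psychomotor skills; components are fractions in [0, 1]."""
    return (0.20 * knowledge_score
            + 0.40 * workflow_osce_score
            + 0.40 * psychomotor_osce_score)

# Hypothetical resident: 83% knowledge test, 80% workflow OSCE, 78% psychomotor OSCE
print(round(proficiency_index(0.83, 0.80, 0.78), 3))  # -> 0.798
```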
Transmission Dynamics of Gram-Negative Bacterial Pathogens in the Anesthesia Work Area
BACKGROUND: Gram-negative organisms are a major health care concern with increasing prevalence of infection and community spread. Our primary aim was to characterize the transmission dynamics of frequently encountered gram-negative bacteria in the anesthesia work area environment (AWE). Our secondary aim was to examine links between these transmission events and 30-day postoperative health care-associated infections (HCAIs). METHODS: Gram-negative isolates obtained from the AWE (patient nasopharynx and axilla, anesthesia provider hands, and the adjustable pressure-limiting valve and agent dial of the anesthesia machine) at 3 major academic medical centers were identified as possible intraoperative bacterial transmission events by class of pathogen, temporal association, and phenotypic analysis (analytical profile indexing). The top 5 frequently encountered genera were subjected to antibiotic disk diffusion sensitivity to identify epidemiologically related transmission events. Complete multivariable logistic regression analysis and binomial tests of proportion were then used to examine the relative contributions of reservoirs of origin and within- and between-case modes of transmission, respectively, to epidemiologically related transmission events. Analyses were conducted with and without the inclusion of duplicate transmission events of the same genera occurring in a given study unit (first and second case of the day in each operating room observed) to examine the potential effect of statistical dependency. Transmitted isolates were compared by pulsed-field gel electrophoresis to disease-causing bacteria for 30-day postoperative HCAIs. RESULTS: The top 5 frequently encountered gram-negative genera included Acinetobacter, Pseudomonas, Brevundimonas, Enterobacter, and Moraxella that together accounted for 81% (767/945) of possible transmission events. For all isolates, 22% (167/767) of possible transmission events were identified by antibiotic susceptibility patterns as epidemiologically related and underwent further study of transmission dynamics. There were 20 duplicates involving within- and between-case transmission events. Thus, approximately 19% (147/767) of isolates excluding duplicates were considered epidemiologically related. Contaminated provider hand reservoirs were less likely (all isolates, odds ratio 0.12, 95% confidence interval 0.03-0.50, P = 0.004; without duplicate events, odds ratio 0.05, 95% confidence interval 0.01-0.49, P = 0.010) than contaminated patient or environmental sites to serve as the reservoir of origin for epidemiologically related transmission events. Within- and between-case modes of gram-negative bacilli transmission occurred at similar rates (all isolates, 7% between-case, 5.2% within-case, binomial P value 0.176; without duplicates, 6.3% between-case, 3.7% within-case, binomial P value 0.036). Overall, 4.0% (23/548) of patients suffered from HCAIs and had an intraoperative exposure to gram-negative isolates. In 8.0% (2/23) of those patients, gram-negative bacteria were linked by pulsed-field gel electrophoresis to the causative organism of infection. Patient and provider hands were identified as the reservoirs of origin and the environment confirmed as a vehicle for between-case transmission events linked to HCAIs. CONCLUSIONS: Between- and within-case AWE gram-negative bacterial transmission occurs frequently and is linked by pulsed-field gel electrophoresis to 30-day postoperative infections. Provider hands are less likely than contaminated environmental or patient skin surfaces to serve as the reservoir of origin for transmission events. © 2015 International Anesthesia Research Society.
Monitoring with head-mounted displays: Performance and safety in a full-scale simulator and part-task trainer
BACKGROUND: Head-mounted displays (HMDs) can help anesthesiologists with intraoperative monitoring by keeping patients' vital signs within view at all times, even while the anesthesiologist is busy performing procedures or unable to see the monitor. The anesthesia literature suggests that there are advantages of HMD use, but research into head-up displays in the cockpit suggests that HMDs may exacerbate inattentional blindness (a tendency for users to miss unexpected but salient events in the field of view) and may introduce perceptual issues relating to focal depth. We investigated these issues in two simulator-based experiments. METHODS: Experiment 1 investigated whether wearing a HMD would affect how quickly anesthesiologists detect events, and whether the focus setting of the HMD (near or far) makes any difference. Twelve anesthesiologists provided anesthesia in three naturalistic scenarios within a simulated operating theater environment. There were 24 different events that occurred either on the patient monitor or in the operating room. Experiment 2 investigated whether anesthesiologists physically constrained by performing a procedure would detect patient-related events faster with a HMD than without. Twelve anesthesiologists performed a complex simulated clinical task on a part-task endoscopic dexterity trainer while monitoring the simulated patient's vital signs. All participants experienced four different events within each of two scenarios. RESULTS: Experiment 1 showed that neither wearing the HMD nor adjusting the focus setting reduced participants' ability to detect events (the number of events detected and time to detect events). In general, participants spent more time looking toward the patient and less time toward the anesthesia machine when they wore the HMD than when they used standard monitoring alone. Participants reported that they preferred the near focus setting. Experiment 2 showed that participants detected two of four events faster with the HMD, but one event more slowly with the HMD. Participants turned to look toward the anesthesia machine significantly less often when using the HMD. When using the HMD, participants reported that they were less busy, monitoring was easier, and they believed they were faster at detecting abnormal changes. CONCLUSIONS: The HMD helped anesthesiologists detect events when physically constrained, but not when physically unconstrained. Although there was no conclusive evidence of the worsened inattentional blindness reported in aviation, the perceptual properties of the HMD display appear to influence whether events are detected. Anesthesiologists wearing HMDs should self-adjust the focus to minimize eyestrain and should be aware that some changes may not attract their attention. Future areas of research include developing principles for the design of HMDs, evaluating other types of HMDs, and evaluating the HMD in clinical contexts. © 2009 by International Anesthesia Research Society.
Staffing With Disease-Based Epidemiologic Indices May Reduce Shortage of Intensive Care Unit Staff During the COVID-19 Pandemic
BACKGROUND: Health care worker (HCW) safety is of pivotal importance during a pandemic such as coronavirus disease 2019 (COVID-19), and employee health and well-being ensure functionality of health care institutions. This is particularly true for an intensive care unit (ICU), where highly specialized staff cannot be readily replaced. In light of the lack of evidence for optimal staffing models in a pandemic, we hypothesized that staff shortage can be reduced when staff scheduling takes the epidemiology of a disease into account. METHODS: Various staffing models were constructed, and comprehensive statistical modeling was performed. A typical routine staffing model was defined that assumed full-time employment (40 h/wk) in a 40-bed ICU with a 2:1 patient-to-staff ratio. A pandemic model assumed that staff worked 12-hour shifts for 7 days every other week. Potential in-hospital staff infections were simulated for a total period of 120 days, with probabilities of 10%, 25%, and 40% of being infected per week while at work. Simulations included the probability of infection at work for a given week, of fatality after infection, and the quarantine time, if infected. RESULTS: Pandemic-adjusted staffing significantly reduced workforce shortage, and the effect progressively increased as the probability of infection increased. Maximum effects were observed at week 4 for each infection probability with a 17%, 32%, and 38% staffing reduction for an infection probability of 0.10, 0.25, and 0.40, respectively. CONCLUSIONS: Staffing along epidemiologic considerations may reduce HCW shortage by leveling the nadir of affected workforce. Although this requires considerable efforts and commitment of staff, it may be essential in an effort to best maintain staff health and operational functionality of health care facilities and systems. © 2020 Lippincott Williams and Wilkins. All rights reserved.
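A simplified Monte Carlo sketch in the spirit of the staffing simulation described, with assumptions that are not taken from the article: two teams alternating 7-day on/off blocks, a fixed weekly infection probability while on duty, a 2-week quarantine, and fatality ignored.

```python
import random

def simulate_available_staff(n_staff=80, weeks=17, p_infect_week=0.25,
                             quarantine_weeks=2, seed=0):
    """Simplified pandemic staffing sketch: two teams alternate weekly;
    staff on duty are infected with a fixed weekly probability and are
    then quarantined for quarantine_weeks (fatality ignored)."""
    random.seed(seed)
    quarantined_until = [0] * n_staff          # week index until which a member is out
    available_per_week = []
    for week in range(weeks):                  # ~17 weeks covers about 120 days
        team = week % 2                        # alternating-week schedule
        available = 0
        for i in range(n_staff):
            if quarantined_until[i] > week:
                continue                        # still in quarantine
            if i % 2 == team:                   # scheduled to work this week
                available += 1
                if random.random() < p_infect_week:
                    quarantined_until[i] = week + 1 + quarantine_weeks
        available_per_week.append(available)
    return available_per_week

print(simulate_available_staff())
```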
Hand Hygiene Knowledge and Perceptions among Anesthesia Providers
BACKGROUND: Health care worker compliance with hand hygiene guidelines is an important measure for health care-associated infection prevention, yet overall compliance across all health care arenas remains low. A correct answer to 4 of 4 structured questions pertaining to indications for hand decontamination (according to types of contact) has been associated with improved health care provider hand hygiene compliance when compared with providers who answered 1 or more questions incorrectly. A better understanding of knowledge deficits among anesthesia providers may lead to hand hygiene improvement strategies. In this study, our primary aims were to characterize and identify predictors for hand hygiene knowledge deficits among anesthesia providers. METHODS: We modified a previously tested survey instrument to measure anesthesia provider hand hygiene knowledge regarding the 5 moments of hand hygiene across national and multicenter groups. Complete knowledge was defined by correct answers to 5 questions addressing the 5 moments for hand hygiene and received a score of 1. Incomplete knowledge was defined by an incorrect answer to 1 or more of the 5 questions and received a score of 0. We used a multilevel random-effects logistic model (XTMELOGIT), clustering at the respondent and geographic location, to model incomplete knowledge, and forward/backward stepwise logistic regression analysis to identify predictors of incomplete knowledge. RESULTS: The survey response rates were 55.8% and 18.2% for the multicenter and national survey study groups, respectively. One or more knowledge deficits occurred with 81.6% of survey respondents, with the mean number of correct answers 2.89 (95% confidence interval, 2.78- 2.99). Failure of providers to recognize prior contact with the environment and prior contact with the patient as hand hygiene opportunities contributed to the low mean. Several cognitive factors were associated with a reduced risk of incomplete knowledge including providers responding positively to washing their hands after contact with the environment (odds ratio [OR] 0.23, 0.14-0.37, P < 0.001), disinfecting their environment during patient care (OR 0.54, 0.35-0.82, P = 0.004), believing that they can influence their colleagues (OR 0.43, 0.27-0.68, P < 0.001), and intending to adhere to guidelines (OR 0.56, 0.36-0.86, P = 0.008). These covariates were associated with an area under receiver operator characteristics curve of 0.79 (95% confidence interval, 0.74-0.83). CONCLUSIONS: Anesthesia provider knowledge deficits regarding hand hygiene guidelines occur frequently and are often due to failure to recognize opportunities for hand hygiene after prior contact with contaminated patient and environmental reservoirs. Intraoperative hand hygiene improvement programs should address these knowledge deficits. Predictors for incomplete knowledge as identified in this study should be validated in future studies. © 2015 International Anesthesia Research Society.
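A minimal sketch of the scoring rule described (score 1 for complete knowledge, that is, all 5 of the 5-moments questions correct; 0 otherwise), with a hypothetical answer key and responses.

```python
def knowledge_score(answers, answer_key):
    """Score a respondent on the 5 'moments for hand hygiene' items:
    1 = complete knowledge (all 5 correct), 0 = incomplete knowledge."""
    correct = sum(a == k for a, k in zip(answers, answer_key))
    return 1 if correct == len(answer_key) else 0

# Hypothetical answer key and two respondents
key = ["yes", "yes", "yes", "yes", "yes"]
print(knowledge_score(["yes"] * 5, key))                         # -> 1 (complete)
print(knowledge_score(["yes", "no", "yes", "yes", "yes"], key))  # -> 0 (incomplete)
```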
Health Numeracy and Relative Risk Comprehension in Perioperative Patients and Physicians
BACKGROUND: Helping patients to understand relative risks is challenging. In discussions with patients, physicians often use numbers to describe hazards, make comparisons, and establish relevance. Patients with a poor understanding of numbers (poor "health numeracy") also have difficulty making decisions and coping with chronic conditions. Although the importance of "health literacy" in perioperative populations is recognized, health numeracy has not been well studied. Our aim was to compare understanding of numbers, risk, and risk modification between a patient population awaiting surgery under general anesthesia and attending physicians at the same center. METHODS: We performed a single-center cross-sectional survey study to compare patients' and physicians' health numeracy. The study instrument was based on the Schwartz-Lipkus survey and included 3 simple health numeracy questions and 2 risk reduction questions in the anesthesiology domain. The survey was mailed to patients over the age of 18 scheduled for elective surgery under general anesthesia between June and September 2019, as well as attending physicians at the study center. RESULTS: Two hundred thirteen of 502 (42%) patient surveys sent and 268 of 506 (53%) physician surveys sent were returned. Median patient score was 4 of 5, but 32% had a score of ≤3. Patients significantly overestimated their total scores by an average of 0.5 points (estimated [mean ± standard deviation (SD)] = 4.3 ± 1.2 vs actual 3.8 ± 1.3; P <.001). Health numeracy was significantly associated with higher educational level (gamma = 0.351; P <.001) and higher income level (gamma = 0.397; P <.001). Physicians' health numeracy was significantly higher than the patients' (median [interquartile range {IQR}] = 5 [4-5] vs 4 [3-5]; P <.001). There was no significant difference between physicians' self-estimated and actual total numeracy score (mean ± SD = 4.8 ± 0.6 vs 4.7 ± 0.6; P =.372). Simple health numeracy (questions 1-3) was predictive of correct risk reduction responses (questions 4, 5) for both patients (gamma = 0.586; P <.001) and physicians (gamma = 0.558; P =.006). CONCLUSIONS: Patients had poor health numeracy compared to physicians and tended to overrate their abilities. A small proportion of physicians also had poor numeracy. Poor health numeracy was associated with incomprehension of risk modification, suggesting that some patients may not understand treatment efficacy. These disparities suggest a need for further inquiry into how to improve patient comprehension of risk modification. Copyright © 2020 International Anesthesia Research Society.
Using Machine Learning to Evaluate Attending Feedback on Resident Performance
BACKGROUND: High-quality and high-utility feedback allows for the development of improvement plans for trainees. The current manual assessment of the quality of this feedback is time consuming and subjective. We propose the use of machine learning to rapidly distinguish the quality of attending feedback on resident performance. METHODS: Using a preexisting databank of 1925 manually reviewed feedback comments from 4 anesthesiology residency programs, we trained machine learning models to predict whether comments contained 6 predefined feedback traits (actionable, behavior focused, detailed, negative feedback, professionalism/communication, and specific) and predict the utility score of the comment on a scale of 1-5. Comments with ≥4 feedback traits were classified as high-quality and comments with utility scores of ≥4 were classified as high-utility; otherwise comments were considered low-quality or low-utility, respectively. We used RapidMiner Studio (RapidMiner, Inc, Boston, MA), a data science platform, to train, validate, and score performance of models. RESULTS: Models for predicting the presence of feedback traits had accuracies of 74.4%-82.2%. Predictions of utility category were 82.1% accurate, with 89.2% sensitivity and 89.8% class precision for low-utility predictions. Predictions of quality category were 78.5% accurate, with 86.1% sensitivity and 85.0% class precision for low-quality predictions. A research assistant with no prior experience in machine learning spent 15 to 20 hours becoming familiar with the software, creating models, and reviewing the performance of the predictions. The program read data, applied models, and generated predictions within minutes. In contrast, a recent manual feedback scoring effort by an author took 15 hours to manually collate and score 200 comments during the course of 2 weeks. CONCLUSIONS: Harnessing the potential of machine learning allows for rapid assessment of attending feedback on resident performance. Using predictive models to rapidly screen for low-quality and low-utility feedback can aid programs in improving feedback provision, both globally and by individual faculty. © 2020 International Anesthesia Research Society.
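The study used RapidMiner Studio; as an analogous (not equivalent) illustration, a classifier for a single feedback trait could be sketched in Python with scikit-learn. The labeled comments below are hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled comments: 1 = contains the 'actionable' trait, 0 = does not
comments = [
    "Review the regional anesthesia guidelines and practice epidural placement on the simulator.",
    "Great job today.",
    "Next time, verbalize your induction plan before entering the room.",
    "Pleasant to work with.",
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a logistic regression classifier
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(comments, labels)

new_comment = ["Work on drawing up emergency drugs before the patient arrives."]
print(model.predict(new_comment))  # predicted trait label for an unseen comment
```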
Association between anesthesiology volumes and early and late outcomes after cystectomy for bladder cancer: A population-based study
BACKGROUND: Hospital and surgeon volume are related to postoperative complications and long-term survival after radical cystectomy. Here, we describe the relationships of these provider characteristics and of anesthesiologist volumes with early and late outcomes after radical cystectomy for bladder cancer. METHODS: Records of treatment and surgical pathology reports were linked to the population-based Ontario Cancer Registry to identify all patients with radical cystectomy in Ontario during 1994 to 2008. Volume was divided into quartiles and determined on the basis of mean annual number of hospital/surgeon/anesthesiologist radical cystectomy cases during a 5-year study period. A composite anesthesiologist volume was also used, defined as major colorectal procedures in addition to radical cystectomy, given the similar complexity of these cases. Logistic and Cox proportional hazards regression models were used to explore the associations between volume and outcomes while adjusting for potential patient-, disease-, and system-related confounders. The primary outcomes were postoperative readmission rates, postoperative mortality, and 5-year survival. RESULTS: The study included 3585 patients with radical cystectomy between 1994 and 2008. Median annual anesthesiologist radical cystectomy volume was 1 (maximum 8.8 cases/year); lowest volume quartile (Q1) <0.6 cases/year and highest volume quartile (Q4) >1.4 cases/year. The median annual composite anesthesiologist volume was 9 radical cystectomy and colorectal cases (Q1 [range 0.2-6.4 cases/year], Q4 [range 11.8-29.2 cases/year]); subsequent analyses used this composite volume. Anesthesiologist volume was associated with readmission rates at 30 days (P =.02, Q1 mean = 27% vs Q4 mean = 21%) and at 90 days (P =.01, Q1 mean = 39% vs Q4 mean = 31%). In multivariable analysis, including the adjustment for surgeon and hospital volume, the cohort of anesthesiologists who performed the lowest volume of cases annually (Q1) was associated with greater rates of readmission at 30 days (OR 1.36, 95% confidence interval [CI], 1.09-1.71, P =.04) and at 90 days (OR 1.36, 95% CI, 1.11-1.66, P =.03). Anesthesiologist volumes were not associated with postoperative mortality or long-term survival. CONCLUSIONS: Anesthesiologist case volume for radical cystectomy was low, reflecting the lack of subspecialization in urologic procedures in routine clinical practice. Lower volume anesthesia providers were associated with higher readmission rates after radical cystectomy. Further studies are needed to validate this finding and to identify the processes that may explain an association between provider volume and patient outcome. © 2017 International Anesthesia Research Society.
Implications of resolved hypoxemia on the utility of desaturation alerts sent from an anesthesia decision support system to supervising anesthesiologists
Background: Hypoxemia (oxygen saturation <90%) lasting 2 or more minutes occurs in 6.8% of adult patients undergoing noncardiac anesthesia in operating room settings. Alarm management functionality can be added to decision support systems (DSS) to send text alerts about vital signs outside specified thresholds, using data in anesthesia information management systems. We considered enhancing our DSS to send hypoxemia alerts to the text pagers of supervising anesthesiologists. As part of a voluntary application for an investigative device exemption from our IRB to implement such functionality, we evaluated the maximum potential utility of such an alert system. Methods: Pulse oximetry values (SpO2) were extracted from our anesthesia information management systems for all cases performed in our main operating rooms and ambulatory surgical center between September 1, 2011, and February 4, 2012 (n = 16,870). Hypoxemic episodes (SpO2 < 90%) were characterized as either (a) lasting one or more minutes or (b) lasting 2 or more minutes. A single simulated "alert" was modeled as having been sent at the timestamp of the first (a) or the second (b) hypoxemic value. The hypoxemic episode was considered resolved at 1, 3, or 5 minutes after the time of the alert if the SpO2 value was no longer below the 90% threshold. Two-sided 99% conservative confidence limits were calculated for the percentage of unresolved alerts at the 3 evaluation intervals and compared with 70%, the lower limit of an acceptable true alarm rate for clinical utility. Results: There was at least 1 hypoxemic episode lasting 1 minute or longer in 23% of cases, and at least 1 episode lasting 2 minutes or longer in 8% of cases. Only 7% (99% confidence interval [CI] 6% to 8%) of the 1-minute hypoxemic episodes were unresolved after 3 minutes, and only 8% (99% CI 6% to 9%) of 2-minute episodes after 5 minutes (both P < 10^-6 in comparison with 70% minimum reliability rate). Conclusions: Low utility should be expected for a DSS sending hypoxemia alerts to supervising anesthesiologists, because nearly all hypoxemic episodes will have been resolved before arrival of the anesthesiologist in the operating room. These results suggest that the principal research focus should be on developing more sophisticated alerts and processes within rooms for the anesthesia care provider to initiate treatment promptly, to interpret or correct artifacts, and to make it easier to call for assistance via a rapid communication system. Copyright © 2012 International Anesthesia Research Society.
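A minimal sketch of the alert-resolution logic described, assuming a hypothetical once-per-minute SpO2 trace for a single case, an alert at the first reading below threshold, and a 3-minute evaluation interval.

```python
def alert_resolved(spo2_by_minute, eval_delay_min=3, threshold=90):
    """Sketch of the alert-resolution logic: an alert fires at the first
    SpO2 reading below threshold; the episode counts as resolved if the
    reading eval_delay_min minutes later is back at or above threshold."""
    for minute, spo2 in enumerate(spo2_by_minute):
        if spo2 < threshold:                       # alert time
            check = minute + eval_delay_min
            if check >= len(spo2_by_minute):
                return None                        # case ended before evaluation
            return spo2_by_minute[check] >= threshold
    return None                                    # no hypoxemic episode

# Hypothetical one-per-minute SpO2 trace for a single case
trace = [98, 97, 89, 88, 92, 96, 97]
print(alert_resolved(trace, eval_delay_min=3))  # True: resolved by 3 minutes
```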
Anesthesiologist staffing considerations consequent to the temporal distribution of hypoxemic episodes in the Postanesthesia care unit
BACKGROUND: Hypoxemia, as measured by pulse oximetry (Spo2), is common in postanesthesia care unit (PACU) patients. The temporal distribution of desaturation has managerial implications because treatment may necessitate the presence of an anesthesiologist. METHODS: We retrieved Spo2 values recorded electronically every 30 to 60 seconds from 137,757 PACU patients over n = 80 four-week periods at an academic medical center. Batch mean methods of analysis were used. Onset times of hypoxemic episodes (defined, on the basis of previous studies, as Spo2 <90% lasting at least 2 minutes) were determined and resolution at 3, 5, and 10 minutes was assessed. Episodes beginning <30 minutes and ≥30 minutes after PACU admission were compared. Patients undergoing intubation in the PACU were identified by doing a free text search of electronically recorded nursing notes for phrases suggesting intubation, followed by a confirmatory manual chart review. Intervals from PACU admission to intubation were determined. RESULTS: Fewer than half (31.2% ± 0.05%) of episodes of PACU hypoxemia lasting ≥2 minutes occurred <30 minutes after PACU admission. Most (i.e., >50%) occurred ≥30 minutes after admission (P < 0.0001). Few (<1%) anesthesia providers transporting patients to the PACU were still present in the PACU 30 minutes after arrival in the PACU. Fewer than half (37%; 95% confidence interval, 27.4% to 48.8%) of PACU intubations occurred <30 minutes after PACU admission. Most (i.e., >50%) occurred ≥30 minutes after admission (P = 0.029). Hypoxemic episodes in the PACU resolved more slowly than episodes in operating rooms (P < 0.0001). After 3 minutes, 40.9% ± 0.6% were unresolved in the PACU versus 23% (99% upper confidence limit) in operating rooms, and 32.6% ± 0.5% vs 9% (99% upper confidence limit) after 5 minutes. CONCLUSIONS: Because most (68.8%) hypoxemic episodes in the PACU occur ≥30 minutes after admission, a time by which the anesthesia provider who transported the patient usually would no longer be present (>99% of cases), the PACU needs to be considered when anesthesiologist operating room staffing and assignment decisions are made. Copyright © 2014 International Anesthesia Research Society.
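For readers unfamiliar with the batch-means approach mentioned in the methods, the sketch below (with hypothetical per-batch proportions, not study data) shows the basic idea: compute the statistic within each 4-week batch, then report the mean of the batch values with its standard error.

```python
import statistics

# Hypothetical proportion of hypoxemic episodes beginning <30 minutes after
# PACU admission, one value per 4-week batch (the study used n = 80 batches).
batch_proportions = [0.29, 0.33, 0.31, 0.30, 0.34, 0.32, 0.28, 0.31]

mean = statistics.mean(batch_proportions)
se = statistics.stdev(batch_proportions) / len(batch_proportions) ** 0.5
print(f"batch-means estimate: {mean:.3f} +/- {se:.3f} (SE)")
```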
The role of the anesthesiologist in fast-track surgery: From multimodal analgesia to perioperative medical care
BACKGROUND: Improving perioperative efficiency and throughput has become increasingly important in the modern practice of anesthesiology. Fast-track surgery represents a multidisciplinary approach to improving perioperative efficiency by facilitating recovery after both minor (i.e., outpatient) and major (inpatient) surgery procedures. In this article we focus on the expanding role of the anesthesiologist in fast-track surgery. METHODS: A multidisciplinary group of clinical investigators met at McGill University in the Fall of 2005 to discuss current anesthetic and surgical practices directed at improving the postoperative recovery process. A subgroup of the attendees at this conference was assigned the task of reviewing the peer-reviewed literature on this topic as it related to the role of the anesthesiologist as a perioperative physician. RESULTS: Anesthesiologists as perioperative physicians play a key role in fast-track surgery through their choice of preoperative medication, anesthetics and techniques, use of prophylactic drugs to minimize side effects (e.g., pain, nausea and vomiting, dizziness), as well as the administration of adjunctive drugs to maintain major organ system function during and after surgery. CONCLUSION: The decisions of the anesthesiologist as a key perioperative physician are of critical importance to the surgical care team in developing a successful fast-track surgery program. © 2007 by International Anesthesia Research Society.
Progressive Increase in Scholarly Productivity of New American Board of Anesthesiology Diplomates From 2006 to 2016: A Bibliometric Analysis
BACKGROUND: Improving research productivity is a common goal in academic anesthesiology. Initiatives to enhance scholarly productivity in anesthesiology were proposed more than a decade ago as a result of emphasis on clinical work. We hypothesized that American Board of Anesthesiology diplomates certified from 2006 to 2016 would be progressively more likely to have published at least once during this time period. METHODS: A complete list of 17,332 new diplomates was obtained from the American Board of Anesthesiology for the years 2006 to 2016. These names were queried using PubMed, and the number of publications up to and including the diplomate’s year of primary certification was recorded. Descriptive statistics and logistic regression analysis were used to analyze the association of the year of primary certification and whether a diplomate had published at least once. RESULTS: The percentage of American Board of Anesthesiology diplomates with ≥1 publication at the time of primary certification increased from 14.9% to 29.3% from 2006 to 2016. The mean number of publications per diplomate more than doubled from 0.31 to 0.79. Logistic regression analysis revealed the year of primary certification as significantly associated with having ≥1 publication (P < .001). Using 2006 as the reference year, odds of having published at least once were higher in the years 2010 to 2016, with the highest odds ratio of having an article published occurring in 2016: 2.359 (confidence interval, 1.978–2.812; P < .001). CONCLUSIONS: Publications by new diplomates of the American Board of Anesthesiology have increased between 2006 and 2016. Whether the observed increase in publications could reflect efforts to stimulate interest in academic objectives during training remains to be proven. Copyright © 2018 International Anesthesia Research Society
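A hedged sketch of the kind of logistic regression described, using simulated (not ABA) data and statsmodels; the per-year odds ratio is the exponentiated coefficient on years since 2006. The simulated trend is an assumption chosen only for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated diplomate-level data: certification year and whether the diplomate
# had >=1 PubMed-indexed publication at certification (assumed rising trend).
rng = np.random.default_rng(0)
year = rng.integers(2006, 2017, size=2000)
p_publish = 0.15 + 0.014 * (year - 2006)
published = (rng.random(2000) < p_publish).astype(int)
df = pd.DataFrame({"years_since_2006": year - 2006, "published": published})

# Logistic regression of publication status on years since 2006
fit = smf.logit("published ~ years_since_2006", data=df).fit(disp=0)
per_year_or = np.exp(fit.params["years_since_2006"])
print(f"estimated odds ratio per additional year: {per_year_or:.3f}")
```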
Academic anesthesiology career development: A mixed-methods evaluation of the role of foundation for anesthesiology education and research funding
BACKGROUND: In 1986, the American Society of Anesthesiologists created the Foundation for Anesthesiology Education and Research (FAER) to fund young anesthesiology investigators toward the goal of helping launch their academic careers. Determining the impact of the FAER grant program is therefore of importance. METHODS: This mixed-methods study included quantitative data collection through a Research Electronic Data Capture survey and curriculum vitae (CV) submission and qualitative interviews. CVs were abstracted for education history, faculty appointment(s), first and last author peer-reviewed publications, grant funding, and leadership positions. Survey nonrespondents were sent up to 3 reminders. Interview questions elicited details about the experience of submitting a FAER grant. Quantitative data were summarized descriptively, and qualitative data were analyzed with NVivo. RESULTS: Of 830 eligible participants, 38.3% (N = 318) completed surveys, 170 submitted CVs, and 21 participated in interviews. Roughly 85% held an academic appointment. Funded applicants were more likely than unfunded applicants to apply for National Institutes of Health funding (60% vs 35%, respectively; P < .01), but the probability of successfully receiving an National Institutes of Health grant did not differ (83% vs 85%, respectively; P = .82). The peer-reviewed publication rate (publications per year since attending medical school) did not differ between funded and unfunded applicants, with an estimated difference in means (95% confidence interval) of 1.3 (–0.3 to 2.9) publications per year. The primary FAER grant mentor for over one-third of interview participants was a nonanesthesiologist. Interview participants commonly discussed the value of having multiple mentors. Key mentor attributes mentioned were availability, guidance, reputation, and history of success. CONCLUSIONS: These cross-sectional data demonstrated career success in publications, grants, and leadership positions for faculty who apply for a FAER grant. A FAER grant application may be a marker for an anesthesiologist who is interested in pursuing a physician-scientist career. Copyright © 2018 International Anesthesia Research Society
Advanced auditory displays and head-mounted displays: Advantages and disadvantages for monitoring by the distracted anesthesiologist
BACKGROUND: In a full-scale anesthesia simulator study we examined the relative effectiveness of advanced auditory displays for respiratory and blood pressure monitoring and of head-mounted displays (HMDs) as supplements to standard intraoperative monitoring. METHODS: Participants were 16 residents and attendings. While performing a reading-based distractor task, participants supervised the activities of a resident (an actor) who they were told was junior to them. If participants detected an event that could eventually harm the simulated patient, they told the resident, pressed a button on the computer screen, and/or informed a nearby experimenter. Participants completed four 22-min anesthesia scenarios. Displays were presented in a counterbalanced order that varied across participants and included: (1) Visual (visual monitor with variable-tone pulse oximetry), (2) HMD (Visual plus HMD), (3) Audio (Visual plus auditory displays for respiratory rate, tidal volume, end-tidal CO2, and noninvasive arterial blood pressure), and (4) Both (Visual plus HMD plus Audio). RESULTS: Participants detected significantly more events with Audio (mean = 90%, median = 100%, P < 0.02) and Both (mean = 92%, median = 100%, P < 0.05) but not with HMD (mean = 75%, median = 67%, ns) compared with the Visual condition (mean = 52%, median = 50%). For events detected, there was no difference in detection times across display conditions. Participants self-rated monitoring as easier in the HMD, Audio and Both conditions and their responding as faster in the HMD and Both conditions than in the Visual condition. CONCLUSIONS: Advanced auditory displays help the distracted anesthesiologist maintain peripheral awareness of a simulated patient's status, whereas a HMD does not significantly improve performance. Further studies should test these findings in other intraoperative contexts. © 2008 International Anesthesia Research Society.
Validation of the Lusaka Formula: A Novel Formula for Weight Estimation in Children Presenting for Surgery in Zambia
BACKGROUND: In children, the use of actual weight or predicted weight from various estimation methods is essential to reduce harm associated with dosing errors. This study aimed to validate the new locally derived Lusaka formula on an independent cohort of children undergoing surgery at the University Teaching Hospital in Lusaka, Zambia, to compare the Lusaka formula's performance to commonly used weight prediction tools and to assess the nutritional status of this population. METHODS: The Lusaka formula (weight = [age in months/2] + 3.5 if under 1 year; weight = 2×[age in years] + 7 if older than 1 year) was derived from a previously published data set. We aimed to validate this formula in a new data set. Weights, heights, and ages of 330 children up to 14 years were measured before surgery. Accuracy was examined by comparing (1) the mean percentage error and (2) the percentage of actual weights that fell within 10% and within 20% of the estimated weight for the Lusaka formula, and for other existing tools. World Health Organization (WHO) growth charts, mid upper arm circumference (MUAC), and body mass index (BMI) were used to assess nutritional status. RESULTS: The Lusaka formula had similar precision to the Broselow tape: 160 (48.5%) vs 158 (51.6%) children were within 10% of the estimated weight, 241 (73.0%) vs 245 (79.5%) children were within 20% of the estimated weight. The Lusaka formula slightly underestimated weight (mean bias, -0.5 kg) in contrast to all other predictive tools, which overestimated on average. Twenty-two percent of children had moderate or severe chronic malnutrition (stunting) and 4.7% of children had moderate or severe acute malnutrition (wasting). CONCLUSIONS: The Lusaka formula is comparable to, or better than, other age-based weight prediction tools in children presenting for surgery at the University Teaching Hospital in Lusaka, Zambia, and has the advantage that it covers a wider age range than tools with comparable accuracy. In this population, commonly used age-based prediction tools significantly overestimate weights. © 2022 Lippincott Williams and Wilkins. All rights reserved.
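Because the Lusaka formula is stated explicitly in the abstract, it can be written directly as a small function; the ages used in the example calls are illustrative only.

```python
def lusaka_estimated_weight_kg(age_years):
    """Weight estimate from the Lusaka formula as stated in the abstract:
    (age in months)/2 + 3.5 for infants under 1 year,
    2 x (age in years) + 7 for children older than 1 year."""
    if age_years < 1:
        age_months = age_years * 12
        return age_months / 2 + 3.5
    return 2 * age_years + 7

print(lusaka_estimated_weight_kg(0.5))  # 6-month-old -> 6.5 kg
print(lusaka_estimated_weight_kg(4))    # 4-year-old  -> 15 kg
```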
An Analysis of Substandard Propofol Detected in Use in Zambian Anesthesia
BACKGROUND: In early 2015, clinicians throughout Zambia noted a range of unpredictable adverse events after the administration of propofol, including urticaria, bronchospasm, profound hypotension, and most predictably an inadequate depth of anesthesia. Suspecting that the propofol itself may have been substandard, samples were procured and sent for testing. METHODS: Three vials from 2 different batches were analyzed using gas chromatography-mass spectrometry methods at the John L. Holmes Mass Spectrometry Facility. RESULTS: Laboratory gas chromatography-mass spectrometry analysis determined that, although all vials contained propofol, its concentration differed between samples and in all cases was well below the stated quantity. Two vials from 1 batch contained only 44% ± 11% and 54% ± 12% of the stated quantity, whereas the third vial from a second batch contained only 57% ± 9%. The analysis found that there were no hexane-soluble impurities in the samples. CONCLUSIONS: None of the analyzed vials contained the stated amount of propofol; however, our analysis did not detect additional contaminants that would explain the adverse events reported by clinicians. Our results confirm the presence of substandard propofol in Zambia; however, anecdotal accounts of substandard anesthetic medicines in other countries abound and warrant further investigation to provide estimates of the prevalence and scope of this global problem. © Copyright 2017 International Anesthesia Research Society.
Association between Participation and Performance in MOCA Minute and Actions Against the Medical Licenses of Anesthesiologists
BACKGROUND: In January 2016, as part of the Maintenance of Certification in Anesthesiology (MOCA) program, the American Board of Anesthesiology launched MOCA Minute, a web-based longitudinal assessment, to supplant the former cognitive examination. We investigated the association between participation and performance in MOCA Minute and disciplinary actions against medical licenses of anesthesiologists. METHODS: All anesthesiologists with time-limited certificates (ie, certified in 2000 or after) who were required to register for MOCA Minute in 2016 were followed up through December 31, 2016. The incidence of postcertification prejudicial license actions was compared between those who did and did not register and compared between registrants who did and did not meet the MOCA Minute performance standard. RESULTS: The cumulative incidence of license actions was 1.2% (245/20,006) in anesthesiologists required to register for MOCA Minute. Nonregistration was associated with a higher incidence of license actions (hazard ratio, 2.93 [95% confidence interval {CI}, 2.15-4.00]). For the 18,534 (92.6%) who registered, later registration (after June 30, 2016) was associated with a higher incidence of license actions. In 2016, 16,308 (88.0%) anesthesiologists met the MOCA Minute performance standard. Of those not meeting the standard (n = 2226), most (n = 2093, 94.0%) failed because they did not complete the required 120 questions. Not meeting the standard was associated with a higher incidence of license actions (hazard ratio, 1.92 [95% CI, 1.36-2.72]). CONCLUSIONS: Both timely participation and meeting performance standard in MOCA Minute are associated with a lower likelihood of being disciplined by a state medical board. © 2019 International Anesthesia Research Society.
Work Habits Are Valid Components of Evaluations of Anesthesia Residents Based on Faculty Anesthesiologists' Daily Written Comments about Residents
Background: In our department, faculty anesthesiologists routinely evaluate the resident physicians with whom they worked in an operative setting the day before, providing numerical scores to questions. The faculty can also enter a written comment if so desired. Because residents' work habits are important to anesthesiology program directors, and work habits can improve with feedback, we hypothesized that faculty comments would include the theme of the anesthesia resident's work habits. Methods: We analyzed all 6692 faculty comments from January 1, 2011, to June 30, 2015. We quantified use of the theme of Dannefer et al.'s work habit scale, specifically the words and phrases in the scale, and synonyms to the words. Results: Approximately half (50.7% [lower 99.99% confidence limit, 48.4%]) of faculty comments contained the theme of work habits. Multiple sensitivity analyses were performed excluding individual faculty, residents, and words. The lower confidence limits for comments containing the theme were each >42.7%. Conclusions: Although faculty anesthesiologists completed (numerical) questions based on the American College of Graduate Medical Education competencies to evaluate residents, an important percentage of written comments included the theme of work habits. The implication is that the theme has validity as one component of the routine evaluation of anesthesia residents. © 2016 International Anesthesia Research Society.
Characteristics of emergency pages using a computer-based anesthesiology paging system in children and adults undergoing procedures at a tertiary care medical center
BACKGROUND: In our large academic supervisory practice, attending anesthesiologists concomitantly care for multiple patients. To manage communications within the procedural environment, we use a proprietary electronic computer-based anesthesiology visual paging system. This system can send an emergency page that instantly alerts the attending anesthesiologist and other available personnel that immediate help is needed. We analyzed the characteristics of intraoperative emergency pages in children and adults. METHODS: We identified all emergency page activations between January 1, 2005 and July 31, 2010 in our main operating rooms. Electronic medical records were reviewed for rates and characteristics of pages such as primary etiology, performed interventions, and outcomes. RESULTS: During the study period, 258,135 anesthetics were performed (n = 32,103 children, younger than 18 years) and 370 emergency pages (n = 309 adults, n = 61 children) were recorded (1.4 per 1000 cases; 95% confidence interval, 1.3-1.6). Infants had the highest rates (9.4 per 1000; 95% confidence interval, 5.7-14.4) of emergency page activations (P < 0.001 compared with each other age group). In adults, the most frequent causes were hemodynamic (55%), and in children respiratory and airway (60.7%) events. CONCLUSION: Emergency pages were rare in patients older than 2 years. Infants were more likely than children 1 to 2 years of age to have emergency page activation, despite both groups being cared for by pediatric fellowship-trained anesthesiologists. Copyright © 2013 International Anesthesia Research Society.
Anesthesiologists' Overconfidence in Their Perceived Knowledge of Neuromuscular Monitoring and Its Relevance to All Aspects of Medical Practice: An International Survey
BACKGROUND: In patients who receive a nondepolarizing neuromuscular blocking drug (NMBD) during anesthesia, undetected postoperative residual neuromuscular block is a common occurrence that carries a risk of potentially serious adverse events, particularly postoperative pulmonary complications. There is abundant evidence that residual block can be prevented when real-time (quantitative) neuromuscular monitoring with measurement of the train-of-four ratio is used to guide NMBD administration and reversal. Nevertheless, a significant percentage of anesthesiologists fail to use quantitative devices or even conventional peripheral nerve stimulators routinely. Our hypothesis was that a contributing factor to the nonutilization of neuromuscular monitoring was anesthesiologists' overconfidence in their knowledge and ability to manage the use of NMBDs without such guidance. METHODS: We conducted an Internet-based multilingual survey among anesthesiologists worldwide. We asked respondents to answer 9 true/false questions related to the use of neuromuscular blocking drugs. Participants were also asked to rate their confidence in the accuracy of each of their answers on a scale of 50% (pure guess) to 100% (certain of answer). RESULTS: Two thousand five hundred sixty persons accessed the website; of these, 1629 anesthesiologists from 80 countries completed the 9-question survey. The respondents correctly answered only 57% of the questions. In contrast, the mean confidence exhibited by the respondents was 84%, which was significantly greater than their accuracy. Of the 1629 respondents, 1496 (92%) were overconfident. CONCLUSIONS: The anesthesiologists surveyed expressed overconfidence in their knowledge and ability to manage the use of NMBDs. This overconfidence may be partially responsible for the failure to adopt routine perioperative neuromuscular monitoring. When clinicians are highly confident in their knowledge about a procedure, they are less likely to modify their clinical practice or seek further guidance on its use. © 2019 International Anesthesia Research Society.
Desire paths for workplace assessment in postgraduate anaesthesia training: analysing informal processes to inform assessment redesign
Background: In postgraduate specialist training, workplace assessments are expected to provide the information required for decisions on trainee progression. Research suggests that meeting this expectation can be difficult in practice, which has led to the development of informal processes, or ‘shadow systems’ of assessment. Rather than rejecting these informal approaches to workplace assessment, we propose borrowing from sociology the concept of ‘desire paths’ to legitimise and strengthen these well-trodden approaches. We asked what information about trainees is currently used or desired by those charged with making decisions on trainee progression, and how is it obtained? Methods: We undertook a qualitative study with thematic analysis of semi-structured interviews of supervisors of training across Australia and New Zealand. Results: From 21 interviews, we identified four interrelated themes, the first being the local context of training sites. The other three themes represent dilemmas in the desire for authentic and representative information about the trainee: 1) how the process of gathering and documenting information can filter, transform, or limit the original message; 2) deciding when possible trainee deviation from performance norms warrants a closer look; and 3) how transparent vs covert information gathering affects the information supervisors will provide, and how control over assessment is distributed between trainee and supervisor. Conclusion: From these themes, we propose a set of design principles for future workplace assessment. Understanding the reasons desire paths exist can inform future assessment redesign, and may address the current disjunct between the formal workplace assessment system and what happens in practice. © 2022 The Authors
Automated responsiveness monitor to titrate propofol sedation
BACKGROUND: In previous studies, we showed that failure to respond to the automated responsiveness monitor (ARM) precedes potentially serious sedation-related adversities associated with loss of responsiveness, and that the ARM was not susceptible to false-positive responses. It remains unknown, however, whether loss and return of response to the ARM occur at similar sedation levels. We hypothesized that loss and return of response to the ARM occur at similar sedation levels in individual subjects, independent of the propofol effect titration scheme. METHODS: Twenty-one healthy volunteers aged 20-45 yr underwent propofol sedation using an effect-site target-controlled infusion system and two different dosing protocol schemes. In all subjects, we increased propofol effect-site concentration (Ce) until loss of response to the ARM occurred. Subsequently, the propofol Ce was decreased either by a fixed percentage (20%, 30%, 40%, 50%, 60%, and 70%; fixed percentage protocol, n = 10) or by a linear deramping (0.1, 0.2, and 0.3 μg · mL-1 · min-1; deramping protocol, n = 11) until the ARM response returned. The propofol Ce was then maintained at the new target for a 6-min interval (Ce plateau) during which arterial samples for propofol determination were obtained, and a clinical assessment of sedation (Observer's Assessment of Alertness/Sedation [OAA/S] score) performed. Each participant in the two protocols experienced each percentage or deramping rate of Ce decrease in random order. The assumption of steady state was tested by plotting the limits of agreement between the starting and ending plasma concentration (Cp) at each Ce plateau. The probability of response to the ARM as a function of propofol Ce, Bispectral Index (BIS) of the electroencephalogram, and OAA/S score was estimated, whereas the effect of the protocol type on these estimates was evaluated using the nested model approach (NONMEM). The combined effect of propofol Ce and BIS on the probability for ARM response was also evaluated using a fractional probability model (PBIS/Ce). RESULTS: The measured propofol Cp at the beginning and the end of the Ce plateau was almost identical. The Ce50 of propofol for responding to the ARM was 1.73 (95% confidence interval: 1.55-2.10) μg/mL, whereas the corresponding BIS50 was 75 (71.3-77). The OAA/S50 probability for ARM response was 12.5/20 (12-13.4). A fractional probability (PBIS/Ce) model for the combined effect of BIS and Ce fitted the data best, with an estimated contribution for BIS of 63%. Loss and return of ARM response occurred at similar sedation levels in individual subjects. CONCLUSIONS: Reproducible ARM dynamics in individual subjects compares favorably with clinical and electroencephalogram sedation end points and suggests that the ARM could be used as an independent instrumental guide of drug effect during propofol-only sedation. Copyright © 2009 International Anesthesia Research Society.
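The reported Ce50 implies a concentration-response relationship for ARM responsiveness. As an illustration only, the sketch below assumes a standard sigmoid probability model; the functional form and the steepness parameter (gamma) are assumptions, and only the Ce50 of 1.73 μg/mL comes from the abstract.

```python
def prob_arm_response(ce: float, ce50: float = 1.73, gamma: float = 5.0) -> float:
    """Probability of responding to the ARM at propofol effect-site
    concentration `ce` (ug/mL), under an assumed sigmoid model:
        P(response) = ce50**gamma / (ce50**gamma + ce**gamma)
    ce50 is taken from the abstract; gamma is illustrative only."""
    return ce50 ** gamma / (ce50 ** gamma + ce ** gamma)

# At the reported Ce50 the probability of response is 0.5 by construction.
print(round(prob_arm_response(1.73), 2))  # 0.5
print(round(prob_arm_response(1.0), 2))   # ~0.94, response likely at low Ce
print(round(prob_arm_response(3.0), 2))   # ~0.06, response unlikely at high Ce
```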
Decreased parasympathetic activity of heart rate variability during anticipation of night duty in anesthesiology residents
BACKGROUND: In residency programs, it is well known that autonomic regulation is influenced by night duty due to workload stress and sleep deprivation. A less investigated question is the impact on the autonomic nervous system of residents before or when anticipating a night duty shift. In this study, heart rate variability (HRV) was evaluated as a measure of autonomic nervous system regulation. METHODS: Eight residents in the Department of Anesthesiology were recruited, and 5 minutes of electrocardiography were recorded under 3 different conditions: (1) the morning of a regular work day (baseline); (2) the morning before a night duty shift (anticipating the night duty); and (3) the morning after a night duty shift. HRV parameters in the time and frequency domains were calculated. Repeated measures analysis of variance was performed to compare the HRV parameters among the 3 conditions. RESULTS: There was a significant decrease of parasympathetic-related HRV measurements (high-frequency power and root mean square of the standard deviation of R-R intervals) in the morning before night duty compared with the regular work day. The mean difference of high-frequency power between the 2 groups was 80.2 ms2 (95% confidence interval, 14.5-146) and that of root mean square of the standard deviation of R-R intervals was 26 milliseconds (95% confidence interval, 7.2-44.8), with P = .016 and .007, respectively. These results suggest that the decrease of parasympathetic activity is associated with stress related to the condition of anticipating the night duty work. On the other hand, the HRV parameters in the morning after duty were not different from the regular workday. CONCLUSIONS: The stress of anticipating the night duty work may affect regulation of the autonomic nervous system, mainly manifested as a decrease in parasympathetic activity. The effect of this change on the health of medical personnel deserves our concern. ©2017 International Anesthesia Research Society.
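For readers unfamiliar with the time-domain measures above, the sketch below computes SDNN and the root mean square of successive R-R interval differences (RMSSD, the parasympathetically mediated metric this abstract appears to describe) from a list of R-R intervals. The interval series is invented; frequency-domain power would additionally require resampling and spectral estimation.

```python
import math

def sdnn(rr_ms):
    """Standard deviation of R-R intervals (ms)."""
    mean = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((x - mean) ** 2 for x in rr_ms) / (len(rr_ms) - 1))

def rmssd(rr_ms):
    """Root mean square of successive differences between R-R intervals (ms),
    a parasympathetically mediated time-domain HRV measure."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d ** 2 for d in diffs) / len(diffs))

# Illustrative 10-beat R-R series (ms), not study data
rr = [812, 790, 805, 830, 818, 795, 801, 824, 809, 797]
print(f"SDNN  = {sdnn(rr):.1f} ms")
print(f"RMSSD = {rmssd(rr):.1f} ms")
```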
Emergency Department Airway Management Responsibilities in the United States
BACKGROUND: In the 1990s, emergency medicine (EM) physicians were responsible for intubating about half of the patients requiring airway management in emergency rooms. Since then, no studies have characterized the airway management responsibilities in the emergency room. METHODS: A survey was sent via the Eastern Association for the Surgery of Trauma and the Trauma Anesthesiology Society listservs, as well as by direct solicitation. Information was collected on trauma center level, geographical location, department responsible for intubation in the emergency room, department responsible for intubation in the trauma bay, whether these roles differed for pediatrics, whether an anesthesiologist was available “in-house” 24 hours a day, and whether there was a protocol for anesthesiologists to assist as backup during intubations. Responses were collected, reviewed, linked by city, and mapped using Python. RESULTS: The majority of the responses came from the Eastern Association for the Surgery of Trauma (84.6%). Of the respondents, 72.6% were from level-1 trauma centers, and most were located in the eastern half of the United States. In the emergency room, EM physicians were primarily responsible for intubations at 81% of the surveyed institutions. In trauma bays, EM physicians were primarily responsible for 61.4% of intubations. There did not appear to be a geographical pattern for personnel responsible for managing the airway at the institutions surveyed. CONCLUSIONS: The majority of institutions have EM physicians managing their airways in both emergency rooms and trauma bays. This may support the observations of an increased percentage of airway management in the emergency room and trauma bay setting by EM physicians compared to 20 years ago. Copyright © 2018 International Anesthesia Research Society
Nighttime extubation does not increase risk of reintubation, length of stay, or mortality: Experience of a large, urban, teaching hospital
BACKGROUND: In the intensive care unit (ICU), extubation failure has been associated with greater resource utilization and worsened clinical outcomes. Most recently, nighttime extubation (NTE) has been reported as a risk factor for increased ICU and hospital mortality. We hypothesized that, in a large, urban, university-affiliated hospital with multidisciplinary assessment for extubation, rigorously protocolized extubation algorithms, and expert airway managers available at all times of day for assessment of high-risk extubations, NTE would not confer additional risk of adverse clinical outcomes. METHODS: This was a retrospective cohort study of mechanically ventilated adults at a single university-affiliated hospital. NTE was defined as occurring between 7:00 pm and 6:59 am the following day. All data were extracted from the institution's electronic medical record. Multivariable regression analyses were used to assess associations between NTE and reintubation, ICU and hospital length of stay (LOS), and mortality with adjustments for demographic and clinical covariates defined a priori. Palliative, unplanned, and routine postoperative extubations were excluded in sensitivity analyses. RESULTS: Of 2241 patients, 204 of 2241 (9.1%) underwent NTE. The rates of reintubation (NTE 6.9% versus daytime extubation [DTE] 12.4%; adjusted odds ratio [95% confidence interval {CI}], 0.78 [0.43-1.41]; P =.41) and in-hospital mortality (NTE 3.4% versus DTE 5.9%; adjusted odds ratio [95% CI], 0.72 [0.28-1.84]; P =.49) were not found to differ. NTE, compared to DTE, was associated with shorter duration of mechanical ventilation (median [interquartile range], 1 [0-1] days vs 2 [1-4] days; adjusted ratio of geometric means [RGMs] [95% CI], 0.64 [0.54-0.70]; P <.001), ICU (2 [1-5] days vs 4 [2-10] days; adjusted RGMs [95% CI], 0.65 [0.57-0.75]; P <.001), and hospital LOS (6 [3-18] days vs 13 [6-25] days; adjusted RGMs [95% CI], 0.64 [0.56-0.74]; P <.001). These results were unchanged in sensitivity analyses. CONCLUSIONS: Patients who underwent NTE were not at increased risk of reintubation or in-hospital mortality. In addition, NTE was associated with a shortened duration of mechanical ventilation and hospital LOS. In health care systems with similar critical care delivery models, NTE may coincide with reduced resource utilization in appropriately selected patients. © 2020 Royal Society of Chemistry. All rights reserved.
Five-year follow-up on the work force and finances of United States anesthesiology training programs: 2000 to 2005
BACKGROUND: In the middle 1990s, there was a decrease in anesthesiology residency class sizes, which contributed to a nationwide shortage of anesthesiologists, resulting in a competitive market with increased salary demands. In 1999, a nationwide survey of the financial status of United States anesthesiology training programs was conducted. Follow-up surveys have been conducted each year thereafter. We present the results of the sixth survey in this series. METHODS: Surveys were distributed by e-mail to the anesthesiology department chairs of the United States Training Programs. Responses were also received by e-mail. RESULTS: One hundred twenty-one departments were surveyed with a response rate of 60%. The 87% of departments seeking at least one additional faculty had an average of 2.8 faculty open positions (5.5% open positions overall which is down from 9.7% in 2000). Of the 96% of departments that employ certified registered nurse anesthetists (CRNAs) 89% were seeking additional CRNAs, averaging 3.6 open positions. The average department received $4.9 million (or $116,000/faculty) in institutional support. When the portion of this support allocated for CRNA salaries was removed, the average department received $4.1 million (or $95,000/faculty) in institutional support. This is a 16% increase over the previous year. Faculty academic time averaged 17% (where 20% is 1 d/wk). Departments billed an average of 11,320 anesthesia units/faculty/yr. Although the average anesthesia unit value collected was $31, departments required approximately $40/U to meet expenses. Medicaid payments averaged $15, ranging from $5 to $30/U. CONCLUSION: These results demonstrate the continuing need for institutional support to keep anesthesiology training departments financially stable. © 2007 by International Anesthesia Research Society.
Default drug doses in anesthesia information management systems
BACKGROUND: In the United States, anesthesia information management systems (AIMS) are well established, especially within academic practices. Many hospitals are replacing their stand-alone AIMS during migration to an enterprise-wide electronic health record. This presents an opportunity to review choices made during the original implementation, based on actual usage. One area amenable to this informatics approach is the configuration in the AIMS of quick buttons for typical drug doses. The use of such short cuts, as opposed to manual typing of doses, simplifies and may improve the accuracy of drug documentation within the AIMS. We analyzed administration data from 3 different institutions, 2 of which had empirically configured default doses, and one in which defaults had not been set up. Our first hypothesis was that most (ie, >50%) drugs would need at least one change to the existing defaults. Our second hypothesis was that for most (>50%) drugs, the 4 most common doses at the site lacking defaults would be included among the most common doses at the 2 sites with defaults. If true, this would suggest that having default doses did not affect the typical administration behavior of providers. METHODS: The frequency distribution of doses for all drugs was determined, and the 4 most common doses representing at least 5% of total administrations for each drug were identified. The appropriateness of the current defaults was determined by the number of changes (0-4) required to match actual usage at the 2 hospitals with defaults. At the institution without defaults, the most frequent doses for the 20 most commonly administered drugs were compared with the default doses at the other institutions. RESULTS: At the 2 institutions with defaults, 84.7% and 77.5% of drugs required at least 1 change in the default drug doses (P < 10^-6 for both compared with 50%), confirming our first hypothesis. At the institution lacking the default drug doses, 100% of the 20 most commonly administered doses (representing ≥5% of use for that drug) were included in the most commonly administered doses at the other 2 institutions (P < 10^-6), confirming our second hypothesis. CONCLUSIONS: We recommend that default drug doses should be analyzed when switching to a new AIMS because most drugs needed at least one change. Such analysis is also recommended periodically so that defaults continue to reflect current practice. The use of default dose buttons does not appear to modify the selection of drug doses in clinical practice. © 2017 International Anesthesia Research Society.
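The core of the analysis described above is a frequency distribution of administered doses per drug, keeping up to the 4 most common doses that each represent at least 5% of administrations, and then counting how many configured defaults would need to change. A minimal sketch under stated assumptions: administration records are simple (drug, dose) lists, the defaults table is hypothetical, and the change-counting rule is one plausible operationalization rather than the authors' exact algorithm.

```python
from collections import Counter

def common_doses(doses, max_defaults=4, min_share=0.05):
    """Up to `max_defaults` most frequent doses, each accounting for at
    least `min_share` of all administrations of the drug."""
    counts = Counter(doses)
    total = sum(counts.values())
    frequent = [dose for dose, n in counts.most_common() if n / total >= min_share]
    return frequent[:max_defaults]

def changes_needed(current_defaults, observed_doses):
    """Number of default buttons (0-4) that would have to change so that
    every commonly used dose is available as a default."""
    target = set(common_doses(observed_doses))
    return len(target - set(current_defaults))

# Hypothetical example: fentanyl administrations (mcg) vs configured defaults
fentanyl_admins = [50] * 40 + [100] * 30 + [25] * 15 + [75] * 10 + [250] * 5
print(common_doses(fentanyl_admins))                          # [50, 100, 25, 75]
print(changes_needed([50, 100, 150, 200], fentanyl_admins))   # 2 defaults need changing
```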
Selective local anesthetic placement using ultrasound guidance and neurostimulation for infraclavicular brachial plexus block
Background: In this study, we performed the infraclavicular block with combined ultrasound guidance and neurostimulation to selectively target cords to compare the success rates of placing a single injection of local anesthetic either in a central or peripheral location. Methods: Two hundred eighteen patients were enrolled in a consecutive, prospective study. Patients were randomized to injection of local anesthetic either centrally (posterior cord) or peripherally (medial or lateral cord) using ultrasound guidance and neurostimulation. Supervised senior anesthesiology residents or attending anesthesiologists performed the blocks. Both intent-to-treat and treatment-received analyses were used to compare central and peripheral placement efficacy. Results: The overall success rate was significantly higher for the central placements than peripheral placements (96% vs 85%, P = 0.004). Individual cord success rates were as follows: posterior 99%, lateral 92%, and medial 84% (P = 0.001). The central group required attending physician intervention more frequently (27% vs 6%, P < 0.001). Postoperative pain scores of ≤3 were more likely with central placement (100% vs 94%, P = 0.012). Conclusion: Central placement of a single injection of local anesthetic targeted at the posterior cord resulted in a higher success rate for infraclavicular block. Copyright © 2010 International Anesthesia Research Society.
Multicenter study validating accuracy of a continuous respiratory rate measurement derived from pulse oximetry: A comparison with capnography
Background: Intermittent measurement of respiratory rate via observation is routine in many patient care settings. This approach has several inherent limitations that diminish the clinical utility of these measurements because it is intermittent, susceptible to human error, and requires clinical resources. As an alternative, a software application that derives continuous respiratory rate measurement from a standard pulse oximeter has been developed. We sought to determine the performance characteristics of this new technology by comparison with clinician-reviewed capnography waveforms in both healthy subjects and hospitalized patients in a low-acuity care setting. Methods: Two independent observational studies were conducted to validate the performance of the Medtronic Nellcor™ Respiration Rate Software application. One study enrolled 26 healthy volunteer subjects in a clinical laboratory, and a second multicenter study enrolled 53 hospitalized patients. During a 30-minute study period taking place while participants were breathing spontaneously, pulse oximeter and nasal/oral capnography waveforms were collected. Pulse oximeter waveforms were processed to determine respiratory rate via the Medtronic Nellcor Respiration Rate Software. Capnography waveforms reviewed by a clinician were used to determine the reference respiratory rate. Results: A total of 23,243 paired observations between the pulse oximeter-derived respiratory rate and the capnography reference method were collected and examined. The mean reference-based respiratory rate was 15.3 ± 4.3 breaths per minute with a range of 4 to 34 breaths per minute. The Pearson linear correlation coefficient (R) between the Medtronic Nellcor Respiration Rate Software values and the capnography reference respiratory rate was 0.92 ± 0.02 (P < .001), whereas Lin's concordance correlation coefficient indicated an overall agreement of 0.85 ± 0.04 (95% confidence interval [CI] +0.76; +0.93) (healthy volunteers: 0.94 ± 0.02 [95% CI +0.91; +0.97]; hospitalized patients: 0.80 ± 0.06 [95% CI +0.68; +0.92]). The mean bias of the Medtronic Nellcor Respiration Rate Software was 0.18 breaths per minute with a precision (SD) of 1.65 breaths per minute (healthy volunteers: 0.37 ± 0.78 [95% limits of agreement: -1.16; +1.90] breaths per minute; hospitalized patients: 0.07 ± 1.99 [95% limits of agreement: -3.84; +3.97] breaths per minute). The root mean square deviation was 1.35 breaths per minute (healthy volunteers: 0.81; hospitalized patients: 1.60). Conclusions: These data demonstrate the performance of the Medtronic Nellcor Respiration Rate Software in healthy subjects and patients hospitalized in a low-acuity care setting when compared with clinician-reviewed capnography. The observed performance of this technology suggests that it may be a useful adjunct to continuous pulse oximetry monitoring by providing continuous respiratory rate measurements. The potential patient safety benefit of using combined continuous pulse oximetry and respiratory rate monitoring warrants assessment. Copyright © 2017 The Author(s). Published by Wolters Kluwer Health, Inc.
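The agreement statistics reported here (bias, precision, limits of agreement, root mean square deviation, and correlation) can be reproduced from paired observations. A minimal sketch, assuming two equal-length arrays of device and reference respiratory rates; the example values are invented, not study data.

```python
import numpy as np

def agreement_stats(device, reference):
    """Bland-Altman-style agreement metrics for paired measurements."""
    device = np.asarray(device, dtype=float)
    reference = np.asarray(reference, dtype=float)
    diff = device - reference
    bias = diff.mean()                       # mean difference (device - reference)
    precision = diff.std(ddof=1)             # SD of the differences
    loa = (bias - 1.96 * precision, bias + 1.96 * precision)  # 95% limits of agreement
    rmsd = np.sqrt(np.mean(diff ** 2))       # root mean square deviation
    r = np.corrcoef(device, reference)[0, 1] # Pearson correlation
    return {"bias": bias, "precision": precision, "loa": loa, "rmsd": rmsd, "r": r}

# Invented paired respiratory rates (breaths/min)
dev = [14, 16, 12, 18, 15, 20, 13, 17]
ref = [15, 16, 11, 19, 15, 21, 12, 17]
print(agreement_stats(dev, ref))
```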
A novel skin-traction method is effective for real-time ultrasound-guided internal jugular vein catheterization in infants and neonates weighing less than 5 kilograms
BACKGROUND: Internal jugular vein (IJV) catheterization in pediatric patients is sometimes difficult because of the small sizes of veins and their collapse during catheterization. To facilitate IJV catheterization, we developed a novel skin-traction method (STM), in which the point of puncture of the skin over the IJV is stretched upward with tape during catheterization. In this study, we examined whether the STM increases the cross-sectional area of the vein and thus facilitates catheterization. METHODS: This was a prospective study conducted from December 2006 to June 2008. We enrolled 28 consecutive infants and neonates weighing <5 kg who underwent surgery for congenital heart disease. The patients were randomly assigned to a group in which STM was performed (STM group) or a group in which it was not performed (non-STM group). The cross-sectional area and diameter of the right IJV in the flat position and 10° Trendelenburg position with and without applying STM were measured. We determined time from first skin puncture to the following: (a) first blood back flow, (b) insertion of guidewire, and (c) insertion of catheter. Number of punctures, success rate, complications, and degree of IJV collapse during advancement of the needle (estimated as decrease of anteroposterior diameter during advancement of the needle compared with the diameter before advancement) were also examined. RESULTS: STM significantly increased the cross-sectional area and the anteroposterior diameter of the IJV in both positions. The time required to insert the catheter was significantly shorter in the STM group, probably mainly due to a shorter guidewire insertion time. The degree of IJV collapse during advancement of the needle was much lower in the STM group. CONCLUSIONS: STM facilitates IJV catheterization in infants and neonates weighing <5 kg by enlarging the IJV and preventing vein collapse. Copyright © 2009 International Anesthesia Research Society.
Assessment of Anesthesia Capacity in Public Surgical Hospitals in Guatemala
BACKGROUND: International standards for safe anesthetic care have been developed by the World Federation of Societies of Anaesthesiologists (WFSA) and the World Health Organization (WHO). Whether these standards are met is unknown in many nations, including Guatemala, a country with universal health coverage. We aimed to establish an overview of anesthesia care capacity in public surgical hospitals in Guatemala to help guide public sector health care development. METHODS: In partnership with the Guatemalan Ministry of Public Health and Social Assistance (MSPAS), a national survey of all public hospitals providing surgical care was conducted using the WFSA anesthesia facility assessment tool (AFAT) in 2018. Each facility was assessed for infrastructure, service delivery, workforce, medications, equipment, and monitoring practices. Descriptive statistics were calculated and presented. RESULTS: Of the 46 public hospitals in Guatemala in 2018, 36 (78%) were found to provide surgical care, including 20 district, 14 regional, and 2 national referral hospitals. We identified 573 full-time physician surgeons, anesthesiologists, and obstetricians (SAO) in the public sector, with an estimated SAO density of 3.3/100,000 population. There were 300 full-time anesthesia providers working at public hospitals. Physician anesthesiologists made up 47% of these providers, with an estimated physician anesthesiologist density of 0.8/100,000 population. Only 10% of district hospitals reported having an anesthesia provider continuously present intraoperatively during general or neuraxial anesthesia cases. No hospitals reported assessing pain in the immediate postoperative period. While the availability of some medications such as benzodiazepines and local anesthetics was robust (100% availability across all hospitals), not all hospitals had essential medications such as ketamine, epinephrine, or atropine. There were deficiencies in the availability of essential equipment and basic intraoperative monitors, such as end-tidal carbon dioxide detectors (17% availability across all hospitals). Postoperative care and access to resuscitative equipment, such as defibrillators, were also lacking. CONCLUSIONS: This first countrywide, MSPAS-led assessment of anesthesia capacity at public facilities in Guatemala revealed a lack of essential materials and personnel to provide safe anesthesia and surgery. Hospitals surveyed often did not have resources regardless of hospital size or level, which may suggest multiple factors preventing availability and use. Local and national policy initiatives are needed to address these deficiencies. © 2020 International Anesthesia Research Society.
Effect of Hypotension Prediction Index-guided intraoperative haemodynamic care on depth and duration of postoperative hypotension: a sub-study of the Hypotension Prediction trial
Background: Intraoperative and postoperative hypotension are associated with morbidity and mortality. The Hypotension Prediction (HYPE) trial showed that the Hypotension Prediction Index (HPI) reduced the depth and duration of intraoperative hypotension (IOH), without excess use of intravenous fluid, vasopressor, and/or inotropic therapies. We hypothesised that intraoperative HPI-guided haemodynamic care would reduce the severity of postoperative hypotension in the PACU. Methods: This was a sub-study of the HYPE study, in which 60 adults undergoing elective noncardiac surgery were allocated randomly to intraoperative HPI-guided or standard haemodynamic care. Blood pressure was measured using a radial intra-arterial catheter, which was connected to a FloTracIQ sensor. Hypotension was defined as MAP <65 mm Hg, and a hypotensive event was defined as MAP <65 mm Hg for at least 1 min. The primary outcome was the time-weighted average (TWA) of postoperative hypotension. Secondary outcomes were absolute incidence, area under threshold for hypotension, and percentage of time spent with MAP <65 mm Hg. Results: Overall, 54/60 (90%) subjects (age 64 (8) yr; 44% female) completed the protocol, owing to failure of the FloTracIQ device in 6/60 (10%) patients. Intraoperative HPI-guided care was used in 28 subjects; 26 subjects were randomised to the control group. Postoperative hypotension occurred in 37/54 (68%) subjects. HPI-guided care did not reduce the median duration (TWA) of postoperative hypotension (adjusted median difference, vs standard of care: 0.118; 95% confidence interval [CI], 0–0.332; P=0.112). HPI-guidance reduced the percentage of time with MAP <65 mm Hg by 4.9% (adjusted median difference: –4.9; 95% CI, –11.7 to –0.01; P=0.046). Conclusions: Intraoperative HPI-guided haemodynamic care did not reduce the TWA of postoperative hypotension. © 2021 The Author(s)
Management of anesthesia equipment failure: A simulation-based resident skill assessment
BACKGROUND: Intraoperative anesthesia equipment failures are a cause of anesthetic morbidity. Our purpose in this study was 1) to design a set of simulated scenarios that measure skill in managing intraoperative equipment-related errors and 2) to evaluate the reliability and validity of the measures from this multiple scenario assessment. METHODS: Eight intraoperative scenarios were created to test anesthesia residents' skills in managing a number of equipment-related failures. Fifty-six resident physicians, divided into four groups based on their training year (Resident 1-Resident 4), participated in the individual simulation-based assessment of equipment-related failures. The score for each scenario was generated by a checklist of key actions relevant to each scenario and time to complete these actions. RESULTS: The residents' scores, on average, improved with increased level of training. The more senior residents (R3 and R4) performed better than more junior residents (R1 and R2). Despite similar training background, there was a wide range of skill among the residents within each training year. The summary score on the eight scenario assessments, measured by either the key actions or the time required to manage the events, yielded a reliable estimate of a resident's skill in managing these simulated equipment failures. DISCUSSION: Anesthesia residents' performances could be reliably evaluated using a set of simulated intraoperative equipment problems. This multiple scenario assessment was an effective method to evaluate individual performance. The summary results, by training year, could be used to determine how successful current instructional methods are for acquiring skill. Copyright © 2009 International Anesthesia Research Society.
A novel classification instrument for intraoperative awareness events
Background: Intraoperative awareness with explicit recall occurs in approximately 1-2 cases per 1000. Given the rarity of the event, a better understanding of awareness and its sequelae will likely require the compilation of data from numerous studies. As such, a standard description and expression of awareness events would be of value. Methods: We developed a novel classification instrument for intraoperative awareness events: Class 0: no awareness; Class 1: isolated auditory perceptions; Class 2: tactile perceptions (e.g., surgical manipulation or endotracheal tube); Class 3: pain; Class 4: paralysis (e.g., feeling one cannot move, speak, or breathe); and Class 5: paralysis and pain. An additional designation of "D" for distress was also included for patient reports of fear, anxiety, suffocation, sense of doom, sense of impending death, or other explicit descriptions. We reviewed 15 studies of the incidence of awareness that provided specific information about awareness reports. Five anesthesiologists at three institutions who developed the categories independently classified the events. An additional 20 individuals (attending anesthesiologists, anesthesiology residents, nurse anesthetists, medical students, and ancillary staff) not involved in the development of the categories also independently classified the events. Fleiss's kappa statistic was used to evaluate inter-observer agreement. Results: One hundred fifty-one cases of intraoperative awareness in adults were identified as valid for analysis. The overall kappa value was 0.851 (0.847-0.856, 95% confidence interval) for the basic Classes 1-5. Including additional designations of emotional distress, the overall kappa value was 0.779 (0.776-0.783, 95% confidence interval). Conclusion: We report a novel classification instrument for intraoperative awareness events that has excellent inter-observer agreement and that may facilitate the study of intraoperative awareness. Copyright © 2010 International Anesthesia Research Society.
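Inter-observer agreement of the kind reported above is commonly summarized with Fleiss's kappa. The sketch below implements the standard formula for a subjects-by-categories matrix of rating counts (each row sums to the number of raters); the small example matrix is invented, not data from the study.

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss's kappa for a (subjects x categories) matrix of rating counts.
    Each row must sum to the same number of raters."""
    counts = np.asarray(counts, dtype=float)
    n_subjects, _ = counts.shape
    n_raters = counts[0].sum()
    # Per-subject observed agreement
    p_i = (np.sum(counts ** 2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()
    # Chance agreement from the overall category proportions
    p_j = counts.sum(axis=0) / (n_subjects * n_raters)
    p_e = np.sum(p_j ** 2)
    return (p_bar - p_e) / (1 - p_e)

# Invented example: 5 awareness reports, 4 raters, 3 classes
ratings = [
    [4, 0, 0],
    [0, 4, 0],
    [1, 3, 0],
    [0, 0, 4],
    [0, 1, 3],
]
print(round(fleiss_kappa(ratings), 3))  # ~0.69 for this invented matrix
```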
A randomized trial of continuous noninvasive blood pressure monitoring during noncardiac surgery
BACKGROUND: Intraoperative hypotension is associated with postoperative mortality. Early detection of hypotension by continuous hemodynamic monitoring might prompt timely therapy, thereby reducing intraoperative hypotension. We tested the hypothesis that continuous noninvasive blood pressure monitoring reduces intraoperative hypotension. METHODS: Patients ≥45 years old with American Society of Anesthesiologists physical status III or IV having moderate-to-high-risk noncardiac surgery with general anesthesia were included. All participating patients had continuous noninvasive hemodynamic monitoring using a finger cuff (ClearSight, Edwards Lifesciences, Irvine, CA) and a standard oscillometric cuff. In half the patients, randomly assigned, clinicians were blinded to the continuous values, whereas the others (unblinded) had access to continuous blood pressure readings. Continuous pressures in both groups were used for analysis. Time-weighted average for mean arterial pressure <65 mm Hg was compared using 2-sample Wilcoxon rank-sum tests and Hodges Lehmann estimation of location shift with corresponding asymptotic 95% CI. RESULTS: Among 320 randomized patients, 316 were included in the intention-to-treat analysis. With 158 patients in each group, those assigned to continuous blood pressure monitoring had significantly lower time-weighted average mean arterial pressure <65 mm Hg, 0.05 [0.00, 0.22] mm Hg, versus intermittent blood pressure monitoring, 0.11 [0.00, 0.54] mm Hg (P = .039, significance criteria P < .048). CONCLUSIONS: Continuous noninvasive hemodynamic monitoring nearly halved the amount of intraoperative hypotension. Hypotension reduction with continuous monitoring, while statistically significant, is currently of uncertain clinical importance. Copyright © 2018 The Author(s). Published by Wolters Kluwer Health, Inc.
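Time-weighted average (TWA) hypotension, the endpoint used above, is the area of the MAP curve below the 65 mm Hg threshold divided by total monitoring time. A minimal sketch, assuming (time, MAP) samples and trapezoidal integration of the depth below threshold; the sample trace is invented.

```python
import numpy as np

def twa_hypotension(times_min, map_mmhg, threshold=65.0):
    """Time-weighted average of MAP below `threshold` (mm Hg):
    area under the threshold (mm Hg x min) divided by total duration (min)."""
    t = np.asarray(times_min, dtype=float)
    depth = np.clip(threshold - np.asarray(map_mmhg, dtype=float), 0.0, None)
    area = np.trapz(depth, t)        # mm Hg x min spent below threshold
    return area / (t[-1] - t[0])     # mm Hg

# Invented 30-min trace sampled every 5 min
t = [0, 5, 10, 15, 20, 25, 30]
mean_ap = [75, 70, 63, 60, 66, 72, 74]
print(round(twa_hypotension(t, mean_ap), 2), "mm Hg")  # ~1.17 mm Hg
```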
Development and Evaluation of a Risk-Adjusted Measure of Intraoperative Hypotension in Patients Having Nonemergent, Noncardiac Surgery
BACKGROUND: Intraoperative hypotension is common and associated with organ injury and death, although randomized data showing a causal relationship remain sparse. A risk-adjusted measure of intraoperative hypotension may therefore contribute to quality improvement efforts. METHODS: The measure we developed defines hypotension as a mean arterial pressure <65 mm Hg sustained for at least 15 cumulative minutes. Comparisons are based on whether clinicians have more or fewer cases of hypotension than expected over 12 months, given their patient mix. The measure was developed and evaluated with data from 225,389 surgeries in 5 hospitals. We assessed discrimination and calibration of the risk adjustment model, then calculated the distribution of clinician-level measure scores, and finally estimated the signal-to-noise reliability and predictive validity of the measure. RESULTS: The risk adjustment model showed acceptable calibration and discrimination (area under the curve was 0.72 and 0.73 in different validation samples). Clinician-level, risk-adjusted scores varied widely, and 36% of clinicians had significantly more cases of intraoperative hypotension than predicted. Clinician-level score distributions differed across hospitals, indicating substantial hospital-level variation. The mean signal-to-noise reliability estimate was 0.87 among all clinicians and 0.94 among clinicians with >30 cases during the 12-month measurement period. Kidney injury and in-hospital mortality were most common in patients whose anesthesia providers had worse scores. However, a sensitivity analysis in 1 hospital showed that score distributions differed markedly between anesthesiology fellows and attending anesthesiologists or certified registered nurse anesthetists; score distributions also varied as a function of the fraction of cases that were inpatients. CONCLUSIONS: Intraoperative hypotension was common and was associated with acute kidney injury and in-hospital mortality. There were substantial variations in clinician-level scores, and the measure score distribution suggests that there may be opportunity to reduce hypotension which may improve patient safety and outcomes. However, sensitivity analyses suggest that some portion of the variation results from limitations of risk adjustment. Future versions of the measure should risk adjust for important patient and procedural factors including comorbidities and surgical complexity, although this will require more consistent structured data capture in anesthesia information management systems. Including structured data on additional risk factors may improve hypotension risk prediction which is integral to the measure's validity. © 2021 Lippincott Williams and Wilkins. All rights reserved.
Multiple reservoirs contribute to intraoperative bacterial transmission
Background: Intraoperative stopcock contamination is a frequent event associated with increased patient mortality. In the current study we examined the relative contributions of anesthesia provider hands, the patient, and the patient environment to stopcock contamination. Our secondary aims were to identify risk factors for stopcock contamination and to examine the prior association of stopcock contamination with 30-day postoperative infection and mortality. Additional microbiological analyses were completed to determine the prevalence of bacterial pathogens within intraoperative bacterial reservoirs. Pulsed-field gel electrophoresis was used to assess the contribution of reservoir bacterial pathogens to 30-day postoperative infections. Methods: In a multicenter study, stopcock transmission events were observed in 274 operating rooms, with the first and second cases of the day in each operating room studied in series to identify within- and between-case transmission events. Reservoir bacterial cultures were obtained and compared with stopcock set isolates to determine the origin of stopcock contamination. Between-case transmission was defined by the isolation of 1 or more bacterial isolates from the stopcock set of a subsequent case (case 2) that were identical to reservoir isolates from the preceding case (case 1). Within-case transmission was defined by the isolation of 1 or more bacterial isolates from a stopcock set that were identical to bacterial reservoirs from the same case. Bacterial pathogens within these reservoirs were identified, and their potential contribution to postoperative infections was evaluated. All patients were followed for 30 days postoperatively for the development of infection and all-cause mortality. Results: Stopcock contamination was detected in 23% (126 out of 548) of cases with 14 between-case and 30 within-case transmission events confirmed. All 3 reservoirs contributed to between-case (64% environment, 14% patient, and 21% provider) and within-case (47% environment, 23% patient, and 30% provider) stopcock transmission. The environment was a more likely source of stopcock contamination than provider hands (relative risk [RR] 1.91, confidence interval [CI] 1.09 to 3.35, P = 0.029) or patients (RR 2.56, CI 1.34 to 4.89, P = 0.002). Hospital site (odds ratio [OR] 5.09, CI 2.02 to 12.86, P = 0.001) and case 2 (OR 6.82, CI 4.03 to 11.5, P < 0.001) were significant predictors of stopcock contamination. Stopcock contamination was associated with increased mortality (OR 58.5, CI 2.32 to 1477, P = 0.014). Intraoperative bacterial contamination of patients and provider hands was linked to 30-day postoperative infections. Conclusions: Bacterial contamination of patients, provider hands, and the environment contributes to stopcock transmission events, but the surrounding patient environment is the most likely source. Stopcock contamination is associated with increased patient mortality. Patient and provider bacterial reservoirs contribute to 30-day postoperative infections. Multimodal programs designed to target each of these reservoirs in parallel should be studied intensely as a comprehensive approach to reducing intraoperative bacterial transmission. Copyright © 2012 International Anesthesia Research Society.
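The relative risks and odds ratios with confidence intervals reported above follow from standard 2×2 contingency-table formulas (normal approximation on the log scale). A minimal sketch with invented counts, not the study's data:

```python
import math

def rr_or_with_ci(a, b, c, d, z=1.96):
    """Relative risk and odds ratio with 95% CIs from a 2x2 table:
        exposed:   a events, b non-events
        unexposed: c events, d non-events"""
    rr = (a / (a + b)) / (c / (c + d))
    se_log_rr = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    rr_ci = (rr * math.exp(-z * se_log_rr), rr * math.exp(z * se_log_rr))

    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    or_ci = (odds_ratio * math.exp(-z * se_log_or), odds_ratio * math.exp(z * se_log_or))
    return rr, rr_ci, odds_ratio, or_ci

# Invented counts: 30/100 contaminated cases with an exposure, 15/100 without
print(rr_or_with_ci(30, 70, 15, 85))  # RR = 2.0, OR ~ 2.43, with 95% CIs
```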
Nasogastric tube insertion using different techniques in anesthetized patients: A prospective, randomized study
BACKGROUND: It is often difficult to correctly place nasogastric (NG) tubes under anesthesia. We hypothesized that simple modifications in the technique of NG tube insertion would improve the success rate. METHODS: Two hundred patients were enrolled into the study. The patients were randomized into four groups: control, guidewire, slit endotracheal tube, and neck flexion with lateral neck pressure. The starting point of the procedure was the time when NG tube insertion was begun through the selected nostril. The end point was the time when there was either a successful insertion of the NG tube or a failure after two attempts. The success rate of the technique, duration of insertion procedure, and the occurrence of complications (bleeding, coiling, kinking, and knotting, etc.) were noted. Chi-square tests, analysis of variance, and Student's t-test were used to analyze the data. RESULTS: Success rates were higher in all intervention groups compared with the control group. The time necessary to insert the NG tube was significantly longer in the slit endotracheal tube group. Kinking of the NG tube and bleeding were the most common complications. CONCLUSION: The success rate of NG tube insertion can be increased by using a ureteral guidewire as a stylet, a slit endotracheal tube as an introducer, or head flexion with lateral neck pressure. Head flexion with lateral neck pressure is the easiest technique, with a high success rate and the fewest complications. Copyright © 2009 International Anesthesia Research Society.
Clinical performance scores are independently associated with the American board of anesthesiology certification examination scores
BACKGROUND: It is unknown whether clinical performance during residency is related to the American Board of Anesthesiology (ABA) oral examination scores. We hypothesized that resident clinical performance would be independently associated with oral examination performance because the oral examination is designed to test for clinical judgment. METHOD: We determined clinical performance scores (Z rel) during the final year of residency for all 124 Massachusetts General Hospital (MGH) anesthesia residents who graduated from 2009 to 2013. One hundred eleven graduates subsequently took the ABA written and oral examinations. We standardized each graduate's written examination score (Z Part 1) and oral examination score (Z Part 2) to the national average. Multiple linear regression analysis was used to determine the partial effects of MGH clinical performance scores and ABA written examination scores on ABA oral examination scores. RESULTS: MGH clinical performance scores (Z rel) correlated with both ABA written examination scores (Z Part 1) (r = 0.27; P = 0.0047) and with ABA oral examination scores (Z Part 2) (r = 0.33; P = 0.0005). ABA written examination scores (Z Part 1) correlated with oral examination scores (Z Part 2) (r = 0.46; P = 0.0001). Clinical performance scores (Z rel) and ABA written examination scores (Z Part 1) independently accounted for 4.5% (95% confidence interval [CI], 0.5%-12.4%; P = 0.012) and 20.8% (95% CI, 8.0%-37.2%; P < 0.0001), respectively, of the variance in ABA oral examination scores (Z Part 2). CONCLUSIONS: Clinical performance scores and ABA written examination scores independently accounted for variance in ABA oral examination scores. Clinical performance scores are independently associated with the ABA oral examination scores. © 2016 International Anesthesia Research Society.
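A worked sketch of the general analysis pattern described above: scores are standardized (Z-scores), and the variance in the oral examination score uniquely attributable to each predictor can be estimated as the increase in R² when that predictor is entered last. The data below are simulated and the ΔR² interpretation is an assumption about the reporting, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 111
clinical = rng.normal(size=n)                                  # standardized clinical score
written = 0.3 * clinical + rng.normal(size=n)                  # standardized written score
oral = 0.2 * clinical + 0.45 * written + rng.normal(size=n)    # oral examination score

def r_squared(y, predictors):
    """R^2 from ordinary least squares with an intercept."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_full = r_squared(oral, [clinical, written])
r2_no_clinical = r_squared(oral, [written])
r2_no_written = r_squared(oral, [clinical])
print("unique variance from clinical score:", round(r2_full - r2_no_clinical, 3))
print("unique variance from written score: ", round(r2_full - r2_no_written, 3))
```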
The association between timing of routine preoperative blood testing and a composite of 30-day postoperative morbidity and mortality
BACKGROUND: Laboratory testing is a common component of preanesthesia evaluation and is designed to identify medical abnormalities that might otherwise remain undetected. While blood testing might optimally be performed shortly before surgery, it is often done earlier for practical reasons. We tested the hypothesis that longer periods between preoperative laboratory testing and surgery are associated with increased odds of having a composite of 30-day morbidity and mortality. METHODS: We obtained preoperative data from 2,320,920 patients in the American College of Surgeons National Surgical Quality Improvement Program who were treated between 2005 and 2012. Our analysis was restricted to relatively healthy patients with American Society of Anesthesiologists physical status I-II who had elective surgery and normal blood test results (n = 235,010). The primary relationship of interest was the odds of 30-day morbidity and mortality as a function of the delay between preoperative testing and surgery. A multivariable logistic regression model was used for the 10 pairwise comparisons among the 5 laboratory timing groups (laboratory blood tests within 1 week of surgery; 1-2 weeks; 2-4 weeks; 1-2 months; and 2-3 months) on 30-day morbidity, adjusting for any imbalanced baseline covariables and type of surgery. RESULTS: A total of 4082 patients (1.74%) had at least one of the component morbidities or died within 30 days after surgery. The observed incidence (unadjusted) was 1.7% when the most recent laboratory blood tests were measured within 1 week of surgery, 1.7% when they were measured within 1-2 weeks, 1.8% when they were measured within 2-4 weeks, 1.7% when they were measured between 1 and 2 months, and 2.0% for patients with the most recent laboratory blood tests measured 2-3 months before surgery. None of the values within 2 months differed significantly: estimated odds ratios for patients with blood tested within 1 week were 1.00 (99.5% confidence interval, 0.89-1.12) as compared to 1-2 weeks, 0.88 (0.77-1.00) for 2-4 weeks, and 0.95 (0.79-1.14) for 1-2 months, respectively. The estimated odds ratios comparing 1-2 weeks to 2-4 weeks and to 1-2 months were 0.88 (0.76-1.03) and 0.95 (0.78-1.16), respectively. Blood testing 2-3 months before surgery was associated with increased odds of the outcome compared to patients whose most recent test was within 1 week (P = .002) or 1-2 weeks of the date of surgery. CONCLUSIONS: In American Society of Anesthesiologists physical status I and II patients, the risk of 30-day morbidity and mortality was not different with blood testing up to 2 months before surgery, suggesting that it is unnecessary to retest patients shortly before surgery. © 2018 International Anesthesia Research Society.
Qualities of Effective Vital Anaesthesia Simulation Training Facilitators Delivering Simulation-Based Education in Resource-Limited Settings
BACKGROUND: Lack of access to safe and affordable anesthesia and surgical care is a major contributor to avoidable death and disability across the globe. Effective education initiatives are a viable mechanism to address critical skill and process gaps in perioperative teams. Vital Anaesthesia Simulation Training (VAST) aims to overcome barriers limiting widespread application of simulation-based education (SBE) in resource-limited environments, providing immersive, low-cost, multidisciplinary SBE and simulation facilitator training. There is a dearth of knowledge regarding the factors supporting effective simulation facilitation in resource-limited environments. Frameworks evaluating simulation facilitation in high-income countries (HICs) are unlikely to fully assess the range of skills required by simulation facilitators working in resource-limited environments. This study explores the qualities of effective VAST facilitators; knowledge gained will inform the design of a framework for assessing simulation facilitators working in resource-limited contexts and promote more effective simulation faculty development. METHODS: This qualitative study used in-depth interviews to explore VAST facilitators' perspectives on attributes and practices of effective simulation in resource-limited settings. Twenty VAST facilitators were purposively sampled and consented to be interviewed. They represented 6 low- and middle-income countries (LMICs) and 3 HICs. Interviews were conducted using a semistructured interview guide. Data analysis involved open coding to inductively identify themes using labels taken from the words of study participants and those from the relevant literature. RESULTS: Emergent themes centered on 4 categories: Persona, Principles, Performance and Progression. Effective VAST facilitators embody a set of traits, style, and personal attributes (Persona) and adhere to certain Principles to optimize the simulation environment, maximize learning, and enable effective VAST Course delivery. Performance describes specific practices that well-trained facilitators demonstrate while delivering VAST courses. Finally, to advance toward competency, facilitators must seek opportunities for skill Progression. Interwoven across categories was the finding that effective VAST facilitators must be cognizant of how context, culture, and language may impact delivery of SBE. The complexity of VAST Course delivery requires that facilitators have a sensitive approach and be flexible, adaptable, and open-minded. To progress toward competency, facilitators must be open to self-reflection, be mentored, and have opportunities for practice. CONCLUSIONS: The results from this study will help to develop a simulation facilitator evaluation tool that incorporates cultural sensitivity, flexibility, and a participant-focused educational model, with broad relevance across varied resource-limited environments. Copyright © 2021 International Anesthesia Research Society.
Availability of lipid emulsion in United States obstetric units
BACKGROUND: Lipid emulsion is recommended in the guidelines for the management of local anesthetic systemic toxicity. In this study, we sought to identify the current level of lipid emulsion availability in U.S. obstetric units. METHODS: A survey was developed addressing lipid emulsion availability and sent to U.S. obstetric anesthesia directors in June 2011. Univariate statistics were used. RESULTS: The response rate was 69%. Lipid emulsion was available in 88% of the units (95% confidence interval, 73%-94%). At least 95% of respondents had lipid emulsion available in <30 minutes (100% of n = 68). CONCLUSIONS: U.S. academic obstetric anesthesia units are equipped to administer lipid emulsion in the setting of local anesthetic systemic toxicity. Copyright © 2013 International Anesthesia Research Society.
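As a purely illustrative aside (not the study data or analysis), the short sketch below shows how a survey proportion and its 95% CI, of the kind reported above, can be computed in Python with statsmodels; the counts used are hypothetical.

```python
# Hedged sketch: Wilson 95% CI for a survey proportion; counts are hypothetical.
from statsmodels.stats.proportion import proportion_confint

available, respondents = 60, 68  # illustrative counts, not the study data
prop = available / respondents
low, high = proportion_confint(available, respondents, alpha=0.05, method="wilson")
print(f"Lipid emulsion available: {prop:.0%} (95% CI, {low:.0%}-{high:.0%})")
```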
The Epidemiology of Staphylococcus aureus Transmission in the Anesthesia Work Area
BACKGROUND: Little is known regarding the epidemiology of intraoperative Staphylococcus aureus transmission. The primary aim of this study was to examine the mode of transmission, reservoir of origin, transmission locations, and antibiotic susceptibility for frequently encountered S aureus strains (phenotypes) in the anesthesia work area. Our secondary aims were to examine phenotypic associations with 30-day postoperative patient cultures, phenotypic growth rates, and risk factors for phenotypic isolation. METHODS: S aureus isolates previously identified as possible intraoperative bacterial transmission events by class of pathogen, temporal association, and analytical profile indexing were subjected to antibiotic disk diffusion sensitivity. The combination of these techniques was then used to confirm S aureus transmission events and to classify them as occurring within or between operative cases (mode). The origin of S aureus transmission events was determined via use of a previously validated experimental model, and links to 30-day postoperative patient cultures were confirmed via pulsed-field gel electrophoresis. Growth rates were assessed via time-to-positivity analysis, and risk factors for isolation were characterized via logistic regression. RESULTS: One hundred seventy S aureus isolates previously implicated as possible intraoperative transmission events were further subdivided by analytical profile indexing phenotype. Two phenotypes, phenotype P (patients) and phenotype H (hands), accounted for 65% of isolates. Phenotype P and phenotype H contributed to at least 1 confirmed transmission event in 39% and 28% of cases, respectively. Patient skin surfaces (odds ratio [OR], 8.40; 95% confidence interval [CI], 2.30-30.73) and environmental (OR, 10.89; 95% CI, 1.29-92.13) samples were more likely than provider hands (referent) to have phenotype P positivity. Phenotype P was more likely than phenotype H to be resistant to methicillin (OR, 4.38; 95% CI, 1.59-12.06; P = 0.004) and to be linked to 30-day postoperative patient cultures (risk ratio, 36.63 [risk difference, 0.174; 95% CI, 0.019-0.328]; P < 0.001). Phenotype P exhibited a faster growth rate than phenotype H for both methicillin-resistant and methicillin-susceptible isolates (phenotype P: median, 10.32 hours; interquartile range, 10.08-10.56; phenotype H: median, 10.56 hours; interquartile range, 10.32-10.8; P = 0.012). Risk factors for isolation of phenotype P included age (OR, 14.11; 95% CI, 3.12-63.5; P = 0.001) and patient exposure to the hospital ward (OR, 41.11; 95% CI, 5.30-318.78; P < 0.001). CONCLUSIONS: Two S aureus phenotypes are frequently transmitted in the anesthesia work area. A patient- and environmentally derived phenotype is associated with increased risk of antibiotic resistance and links to 30-day postoperative patient cultures as compared with a provider hand-derived phenotype. Future work should be directed toward improved screening and decolonization of patients entering the perioperative arena and improved intraoperative environmental cleaning to attenuate postoperative health care-associated infections. © 2015 International Anesthesia Research Society.
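As a hedged illustration of the effect measures reported above (a risk ratio and a risk difference for links to 30-day postoperative cultures), the sketch below computes both with Wald-type 95% CIs from a 2 x 2 table; the counts are invented for the example and are not the study data.

```python
# Hedged sketch: risk ratio and risk difference with Wald 95% CIs from a
# 2 x 2 table. The counts below are hypothetical, not the study data.
import numpy as np
from scipy import stats

linked_p, n_p = 11, 60   # phenotype P cases linked to a 30-day culture / total
linked_h, n_h = 1, 55    # phenotype H cases linked to a 30-day culture / total

risk_p, risk_h = linked_p / n_p, linked_h / n_h
z = stats.norm.ppf(0.975)

# Risk ratio with a log-scale (Katz) Wald CI.
rr = risk_p / risk_h
se_log_rr = np.sqrt(1 / linked_p - 1 / n_p + 1 / linked_h - 1 / n_h)
rr_lo, rr_hi = np.exp(np.log(rr) + np.array([-1, 1]) * z * se_log_rr)

# Risk difference with a Wald CI.
rd = risk_p - risk_h
se_rd = np.sqrt(risk_p * (1 - risk_p) / n_p + risk_h * (1 - risk_h) / n_h)
rd_lo, rd_hi = rd + np.array([-1, 1]) * z * se_rd

print(f"Risk ratio: {rr:.2f} (95% CI, {rr_lo:.2f}-{rr_hi:.2f})")
print(f"Risk difference: {rd:.3f} (95% CI, {rd_lo:.3f} to {rd_hi:.3f})")
```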
A Comparison of Web-Based with Traditional Classroom-Based Training of Lung Ultrasound for the Exclusion of Pneumothorax
BACKGROUND: Lung ultrasound (LUS) is a well-established method that can exclude pneumothorax by demonstration of pleural sliding and the associated ultrasound artifacts. The positive diagnosis of pneumothorax is more difficult to obtain and relies on detection of the edge of a pneumothorax, called the "lung point." Yet, anesthesiologists are not widely taught these techniques, even though their patients are susceptible to pneumothorax either through trauma or as a result of central line placement or regional anesthesia techniques performed near the thorax. In anticipation of an increased training demand for LUS, efficient and scalable teaching methods should be developed. In this study, we compared the improvement in LUS skills after either Web-based or classroom-based training. We hypothesized that Web-based training would not be inferior to "traditional" classroom-based training beyond a noninferiority limit of 10% and that both would be superior to no training. Furthermore, we hypothesized that this short training session would lead to LUS skills that are similar to those of ultrasound-trained emergency medicine (EM) physicians. METHODS: After a pretest, anesthesiologists from 4 academic teaching hospitals were randomized to Web-based (group Web), classroom-based (group class), or no training (group control) and then completed a posttest. Groups Web and class returned for a retention test 4 weeks later. All 3 tests were similar, testing both practical and theoretical knowledge. EM physicians (group EM) performed the pretest only. Teaching for group class consisted of a standardized PowerPoint lecture conforming to the Consensus Conference on LUS followed by hands-on training. Group Web received a narrated video of the same PowerPoint presentation, followed by an online demonstration of LUS that also instructs the viewer to perform an LUS on himself using a clinically available ultrasound machine and submit smartphone snapshots of the resulting images as part of a portfolio system. Group Web received no other hands-on training. RESULTS: Groups Web, class, control, and EM contained 59, 59, 20, and 42 subjects, respectively. After training, overall test results of groups Web and class improved by a mean of 42.9% (±18.1% SD) and 39.2% (±19.2% SD), whereas the score of group control did not improve significantly. The test improvement of group Web was not inferior to group class. The posttest scores of groups Web and class were not significantly different from group EM. In comparison with the posttests, the retention test scores did not change significantly in either group. CONCLUSIONS: When training anesthesiologists to perform LUS for the exclusion of pneumothorax, we found that Web-based training was not inferior to traditional classroom-based training and was effective, leading to test scores that were similar to a group of clinicians experienced in LUS. © Copyright 2016 International Anesthesia Research Society.
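For illustration only (not the authors' analysis), the sketch below shows one way such a noninferiority comparison might be carried out: simulate per-subject improvements for the Web and classroom arms, form a Welch 95% CI for the difference in mean improvement, and check its lower bound against the -10 percentage-point margin. The simulated scores only loosely mirror the reported means and SDs.

```python
# Hedged sketch of a noninferiority check on the difference in mean improvement.
# Scores are simulated to loosely mirror the reported means/SDs; not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
margin = -10.0  # noninferiority limit: Web may be at most 10 points worse

web = rng.normal(42.9, 18.1, size=59)        # simulated improvements, group Web
classroom = rng.normal(39.2, 19.2, size=59)  # simulated improvements, group class

diff = web.mean() - classroom.mean()
v_w = web.var(ddof=1) / web.size
v_c = classroom.var(ddof=1) / classroom.size
se = np.sqrt(v_w + v_c)
# Welch-Satterthwaite degrees of freedom.
dof = (v_w + v_c) ** 2 / (v_w**2 / (web.size - 1) + v_c**2 / (classroom.size - 1))
lower = diff - stats.t.ppf(0.975, dof) * se  # lower bound of two-sided 95% CI

print(f"Difference (Web - class): {diff:.1f} points; lower 95% bound: {lower:.1f}")
print("Noninferior" if lower > margin else "Noninferiority not shown")
```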
Comparison of a novel cadaver model (Fix for Life) with the formalin-fixed cadaver and manikin model for suitability and realism in airway management training
BACKGROUND: Manikins are widely used in airway management training; however, simulation of realism and interpatient variability remains a challenge. We investigated whether cadavers embalmed with the novel Fix for Life (F4L) embalming method are a suitable and realistic model for teaching 3 basic airway skills: facemask ventilation, tracheal intubation, and laryngeal mask insertion compared to a manikin (SimMan 3G) and formalin-fixed cadavers. METHODS: Thirty anesthesiologists and experienced residents ("operators") were instructed to perform the 3 airway techniques in 10 F4L cadavers, 10 formalin-fixed cadavers, and 1 manikin. The order of the model type was randomized per operator. Primary outcomes were the operators' ranking of each model type as a teaching model (total rank), ranking of the model types per technique, and an operator's average verbal rating score for suitability and realism of learning the technique on the model. Secondary outcomes were the percentages of successfully performed procedures per technique and per model (success rates in completing the respective airway maneuvers). For each of the airway techniques, the Friedman analysis of variance was used to compare the 3 models on mean operator ranking and mean verbal rating scores. RESULTS: Twenty-seven of 30 operators (90%) performed all airway techniques on all of the available models, whereas 3 operators performed the majority but not all of the airway maneuvers on all models for logistical reasons. The total number of attempts for each technique was 30 on the manikin, 292 on the F4L cadavers, and 282 on the formalin-fixed cadavers. The operators' median total ranking of each model type as a teaching model was 1 for F4L, 2 for the manikin, and 3 for the formalin-fixed cadavers (P < .001). F4L was considered the best model for mask ventilation (P = .029) and had a higher mean verbal rating score for realism in laryngeal mask airway insertion (P = .043). The F4L and manikin did not differ significantly in other scores for suitability and realism. The formalin-fixed cadaver was ranked last and received the lowest scores in all procedures (all P < .001). Success rates of the procedures were highest on the manikin. CONCLUSIONS: F4L cadavers were ranked highest for mask ventilation and were considered the most realistic model for training laryngeal mask insertion. Formalin-fixed cadavers are inappropriate for airway management training. © 2018 The Author(s).
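As an illustration of the within-operator comparison described above (not the authors' code or data), the sketch below applies a Friedman test to simulated per-operator ratings of the three models; the ratings are invented to loosely reflect the reported ordering (F4L best, manikin second, formalin-fixed worst).

```python
# Hedged sketch: Friedman test across 3 related samples (one rating per model
# per operator). Ratings are simulated, not the study data.
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(2)
n_operators = 30

# Simulated per-operator suitability/realism ratings (higher = better),
# loosely reflecting the reported ordering.
f4l = rng.normal(8, 1, n_operators)
manikin = rng.normal(7, 1, n_operators)
formalin = rng.normal(4, 1, n_operators)

stat, p = friedmanchisquare(f4l, manikin, formalin)
print(f"Friedman chi-square = {stat:.2f}, P = {p:.4g}")
```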